Jan 20 13:55:51.161668 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490] Jan 20 13:55:51.161685 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Tue Jan 20 12:20:43 -00 2026 Jan 20 13:55:51.161692 kernel: KASLR enabled Jan 20 13:55:51.161697 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jan 20 13:55:51.161701 kernel: printk: legacy bootconsole [pl11] enabled Jan 20 13:55:51.161705 kernel: efi: EFI v2.7 by EDK II Jan 20 13:55:51.161711 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89c018 RNG=0x3f979998 MEMRESERVE=0x3db83598 Jan 20 13:55:51.161715 kernel: random: crng init done Jan 20 13:55:51.161719 kernel: secureboot: Secure boot disabled Jan 20 13:55:51.161723 kernel: ACPI: Early table checksum verification disabled Jan 20 13:55:51.161728 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL) Jan 20 13:55:51.161732 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 20 13:55:51.161736 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 20 13:55:51.161741 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 20 13:55:51.161747 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 20 13:55:51.161751 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 20 13:55:51.161756 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 20 13:55:51.161761 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 20 13:55:51.161766 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 20 13:55:51.161770 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 20 13:55:51.161774 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jan 20 13:55:51.161779 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 20 13:55:51.161783 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jan 20 13:55:51.161788 kernel: ACPI: Use ACPI SPCR as default console: Yes Jan 20 13:55:51.161792 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 20 13:55:51.161797 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug Jan 20 13:55:51.161801 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug Jan 20 13:55:51.161806 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 20 13:55:51.161811 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 20 13:55:51.161815 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 20 13:55:51.161820 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 20 13:55:51.161824 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 20 13:55:51.161828 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 20 13:55:51.161833 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 20 13:55:51.161837 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 20 13:55:51.161842 kernel: ACPI: SRAT: Node 0 PXM 
0 [mem 0x800000000000-0xffffffffffff] hotplug Jan 20 13:55:51.161846 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff] Jan 20 13:55:51.161851 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff] Jan 20 13:55:51.161856 kernel: Zone ranges: Jan 20 13:55:51.161861 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jan 20 13:55:51.161867 kernel: DMA32 empty Jan 20 13:55:51.161872 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jan 20 13:55:51.161876 kernel: Device empty Jan 20 13:55:51.161882 kernel: Movable zone start for each node Jan 20 13:55:51.161886 kernel: Early memory node ranges Jan 20 13:55:51.161891 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jan 20 13:55:51.161896 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff] Jan 20 13:55:51.161900 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff] Jan 20 13:55:51.161905 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff] Jan 20 13:55:51.161910 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff] Jan 20 13:55:51.161914 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff] Jan 20 13:55:51.161919 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jan 20 13:55:51.161924 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jan 20 13:55:51.161929 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jan 20 13:55:51.161934 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1 Jan 20 13:55:51.161938 kernel: psci: probing for conduit method from ACPI. Jan 20 13:55:51.161943 kernel: psci: PSCIv1.3 detected in firmware. Jan 20 13:55:51.161948 kernel: psci: Using standard PSCI v0.2 function IDs Jan 20 13:55:51.161952 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jan 20 13:55:51.161957 kernel: psci: SMC Calling Convention v1.4 Jan 20 13:55:51.161962 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jan 20 13:55:51.161966 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jan 20 13:55:51.161971 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jan 20 13:55:51.161976 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jan 20 13:55:51.161981 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 20 13:55:51.161986 kernel: Detected PIPT I-cache on CPU0 Jan 20 13:55:51.161991 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Jan 20 13:55:51.161995 kernel: CPU features: detected: GIC system register CPU interface Jan 20 13:55:51.162000 kernel: CPU features: detected: Spectre-v4 Jan 20 13:55:51.162005 kernel: CPU features: detected: Spectre-BHB Jan 20 13:55:51.162009 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 20 13:55:51.162014 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 20 13:55:51.162019 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Jan 20 13:55:51.162024 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 20 13:55:51.162029 kernel: alternatives: applying boot alternatives Jan 20 13:55:51.162035 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=46cab7ab51efe59ae17cc7b7aabd508bb3cc66ebfc63b9c99ab53383296c1454 Jan 20 13:55:51.162040 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 20 13:55:51.162044 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 20 13:55:51.162049 kernel: Fallback order for Node 0: 0 Jan 20 13:55:51.162054 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Jan 20 13:55:51.162058 kernel: Policy zone: Normal Jan 20 13:55:51.162063 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 20 13:55:51.162068 kernel: software IO TLB: area num 2. Jan 20 13:55:51.162072 kernel: software IO TLB: mapped [mem 0x0000000037360000-0x000000003b360000] (64MB) Jan 20 13:55:51.162077 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 20 13:55:51.162083 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 20 13:55:51.162088 kernel: rcu: RCU event tracing is enabled. Jan 20 13:55:51.162093 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 20 13:55:51.162097 kernel: Trampoline variant of Tasks RCU enabled. Jan 20 13:55:51.162102 kernel: Tracing variant of Tasks RCU enabled. Jan 20 13:55:51.162107 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 20 13:55:51.162112 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 20 13:55:51.162116 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 20 13:55:51.162121 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 20 13:55:51.162126 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 20 13:55:51.162130 kernel: GICv3: 960 SPIs implemented Jan 20 13:55:51.162136 kernel: GICv3: 0 Extended SPIs implemented Jan 20 13:55:51.162140 kernel: Root IRQ handler: gic_handle_irq Jan 20 13:55:51.162145 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jan 20 13:55:51.162150 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Jan 20 13:55:51.162155 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 20 13:55:51.162159 kernel: ITS: No ITS available, not enabling LPIs Jan 20 13:55:51.162164 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 20 13:55:51.162169 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Jan 20 13:55:51.162173 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 20 13:55:51.162178 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Jan 20 13:55:51.162183 kernel: Console: colour dummy device 80x25 Jan 20 13:55:51.162189 kernel: printk: legacy console [tty1] enabled Jan 20 13:55:51.162194 kernel: ACPI: Core revision 20240827 Jan 20 13:55:51.162199 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Jan 20 13:55:51.162204 kernel: pid_max: default: 32768 minimum: 301 Jan 20 13:55:51.162209 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 20 13:55:51.162214 kernel: landlock: Up and running. Jan 20 13:55:51.162219 kernel: SELinux: Initializing. Jan 20 13:55:51.162225 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 13:55:51.162230 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 13:55:51.162235 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1 Jan 20 13:55:51.162240 kernel: Hyper-V: Host Build 10.0.26102.1172-1-0 Jan 20 13:55:51.162248 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 20 13:55:51.162254 kernel: rcu: Hierarchical SRCU implementation. Jan 20 13:55:51.162259 kernel: rcu: Max phase no-delay instances is 400. Jan 20 13:55:51.162264 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 20 13:55:51.162269 kernel: Remapping and enabling EFI services. Jan 20 13:55:51.162275 kernel: smp: Bringing up secondary CPUs ... Jan 20 13:55:51.162280 kernel: Detected PIPT I-cache on CPU1 Jan 20 13:55:51.162285 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 20 13:55:51.162290 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Jan 20 13:55:51.162296 kernel: smp: Brought up 1 node, 2 CPUs Jan 20 13:55:51.162301 kernel: SMP: Total of 2 processors activated. 
Jan 20 13:55:51.162307 kernel: CPU: All CPU(s) started at EL1 Jan 20 13:55:51.162312 kernel: CPU features: detected: 32-bit EL0 Support Jan 20 13:55:51.162317 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 20 13:55:51.162322 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 20 13:55:51.162327 kernel: CPU features: detected: Common not Private translations Jan 20 13:55:51.162333 kernel: CPU features: detected: CRC32 instructions Jan 20 13:55:51.162339 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Jan 20 13:55:51.162344 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 20 13:55:51.162349 kernel: CPU features: detected: LSE atomic instructions Jan 20 13:55:51.162354 kernel: CPU features: detected: Privileged Access Never Jan 20 13:55:51.162359 kernel: CPU features: detected: Speculation barrier (SB) Jan 20 13:55:51.162364 kernel: CPU features: detected: TLB range maintenance instructions Jan 20 13:55:51.162370 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 20 13:55:51.162375 kernel: CPU features: detected: Scalable Vector Extension Jan 20 13:55:51.162381 kernel: alternatives: applying system-wide alternatives Jan 20 13:55:51.162396 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Jan 20 13:55:51.162401 kernel: SVE: maximum available vector length 16 bytes per vector Jan 20 13:55:51.162406 kernel: SVE: default vector length 16 bytes per vector Jan 20 13:55:51.162412 kernel: Memory: 3979836K/4194160K available (11200K kernel code, 2458K rwdata, 9092K rodata, 12480K init, 1038K bss, 193136K reserved, 16384K cma-reserved) Jan 20 13:55:51.162418 kernel: devtmpfs: initialized Jan 20 13:55:51.162423 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 20 13:55:51.162428 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 20 13:55:51.162434 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 20 13:55:51.162439 kernel: 0 pages in range for non-PLT usage Jan 20 13:55:51.162444 kernel: 515152 pages in range for PLT usage Jan 20 13:55:51.162449 kernel: pinctrl core: initialized pinctrl subsystem Jan 20 13:55:51.162455 kernel: SMBIOS 3.1.0 present. Jan 20 13:55:51.162460 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025 Jan 20 13:55:51.162465 kernel: DMI: Memory slots populated: 2/2 Jan 20 13:55:51.162470 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 20 13:55:51.162476 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 20 13:55:51.162481 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 20 13:55:51.162486 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 20 13:55:51.162493 kernel: audit: initializing netlink subsys (disabled) Jan 20 13:55:51.162498 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1 Jan 20 13:55:51.162503 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 20 13:55:51.162508 kernel: cpuidle: using governor menu Jan 20 13:55:51.162513 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 20 13:55:51.162519 kernel: ASID allocator initialised with 32768 entries Jan 20 13:55:51.162524 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 20 13:55:51.162529 kernel: Serial: AMBA PL011 UART driver Jan 20 13:55:51.162535 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 20 13:55:51.162540 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 20 13:55:51.162545 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 20 13:55:51.162550 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 20 13:55:51.162556 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 20 13:55:51.162561 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 20 13:55:51.162566 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 20 13:55:51.162572 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 20 13:55:51.162577 kernel: ACPI: Added _OSI(Module Device) Jan 20 13:55:51.162582 kernel: ACPI: Added _OSI(Processor Device) Jan 20 13:55:51.162587 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 20 13:55:51.162592 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 20 13:55:51.162598 kernel: ACPI: Interpreter enabled Jan 20 13:55:51.162603 kernel: ACPI: Using GIC for interrupt routing Jan 20 13:55:51.162609 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 20 13:55:51.162614 kernel: printk: legacy console [ttyAMA0] enabled Jan 20 13:55:51.162619 kernel: printk: legacy bootconsole [pl11] disabled Jan 20 13:55:51.162624 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 20 13:55:51.162629 kernel: ACPI: CPU0 has been hot-added Jan 20 13:55:51.162634 kernel: ACPI: CPU1 has been hot-added Jan 20 13:55:51.162640 kernel: iommu: Default domain type: Translated Jan 20 13:55:51.162646 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 20 13:55:51.162651 kernel: efivars: Registered efivars operations Jan 20 13:55:51.162656 kernel: vgaarb: loaded Jan 20 13:55:51.162661 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 20 13:55:51.162666 kernel: VFS: Disk quotas dquot_6.6.0 Jan 20 13:55:51.162671 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 20 13:55:51.162676 kernel: pnp: PnP ACPI init Jan 20 13:55:51.162682 kernel: pnp: PnP ACPI: found 0 devices Jan 20 13:55:51.162687 kernel: NET: Registered PF_INET protocol family Jan 20 13:55:51.162693 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 20 13:55:51.162698 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 20 13:55:51.162703 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 20 13:55:51.162708 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 20 13:55:51.162713 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 20 13:55:51.162719 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 20 13:55:51.162725 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 13:55:51.162730 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 13:55:51.162735 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 20 13:55:51.162740 kernel: PCI: CLS 0 bytes, default 64 Jan 20 13:55:51.162745 kernel: kvm [1]: HYP mode not available Jan 
20 13:55:51.162751 kernel: Initialise system trusted keyrings Jan 20 13:55:51.162756 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 20 13:55:51.162762 kernel: Key type asymmetric registered Jan 20 13:55:51.162767 kernel: Asymmetric key parser 'x509' registered Jan 20 13:55:51.162772 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 20 13:55:51.162777 kernel: io scheduler mq-deadline registered Jan 20 13:55:51.162782 kernel: io scheduler kyber registered Jan 20 13:55:51.162787 kernel: io scheduler bfq registered Jan 20 13:55:51.162793 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 20 13:55:51.162799 kernel: thunder_xcv, ver 1.0 Jan 20 13:55:51.162804 kernel: thunder_bgx, ver 1.0 Jan 20 13:55:51.162809 kernel: nicpf, ver 1.0 Jan 20 13:55:51.162814 kernel: nicvf, ver 1.0 Jan 20 13:55:51.162945 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 20 13:55:51.163013 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-20T13:55:47 UTC (1768917347) Jan 20 13:55:51.163022 kernel: efifb: probing for efifb Jan 20 13:55:51.163027 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 20 13:55:51.163032 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 20 13:55:51.163038 kernel: efifb: scrolling: redraw Jan 20 13:55:51.163043 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 20 13:55:51.163048 kernel: Console: switching to colour frame buffer device 128x48 Jan 20 13:55:51.163053 kernel: fb0: EFI VGA frame buffer device Jan 20 13:55:51.163059 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 20 13:55:51.163064 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 20 13:55:51.163070 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jan 20 13:55:51.163075 kernel: NET: Registered PF_INET6 protocol family Jan 20 13:55:51.163080 kernel: watchdog: NMI not fully supported Jan 20 13:55:51.163086 kernel: watchdog: Hard watchdog permanently disabled Jan 20 13:55:51.163091 kernel: Segment Routing with IPv6 Jan 20 13:55:51.163097 kernel: In-situ OAM (IOAM) with IPv6 Jan 20 13:55:51.163102 kernel: NET: Registered PF_PACKET protocol family Jan 20 13:55:51.163107 kernel: Key type dns_resolver registered Jan 20 13:55:51.163112 kernel: registered taskstats version 1 Jan 20 13:55:51.163117 kernel: Loading compiled-in X.509 certificates Jan 20 13:55:51.163122 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: b24bd8a65309b0fd569cb3209113db463649d1db' Jan 20 13:55:51.163128 kernel: Demotion targets for Node 0: null Jan 20 13:55:51.163134 kernel: Key type .fscrypt registered Jan 20 13:55:51.163139 kernel: Key type fscrypt-provisioning registered Jan 20 13:55:51.163144 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 20 13:55:51.163149 kernel: ima: Allocated hash algorithm: sha1 Jan 20 13:55:51.163154 kernel: ima: No architecture policies found Jan 20 13:55:51.163159 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 20 13:55:51.163165 kernel: clk: Disabling unused clocks Jan 20 13:55:51.163170 kernel: PM: genpd: Disabling unused power domains Jan 20 13:55:51.163176 kernel: Freeing unused kernel memory: 12480K Jan 20 13:55:51.163181 kernel: Run /init as init process Jan 20 13:55:51.163186 kernel: with arguments: Jan 20 13:55:51.163191 kernel: /init Jan 20 13:55:51.163196 kernel: with environment: Jan 20 13:55:51.163201 kernel: HOME=/ Jan 20 13:55:51.163206 kernel: TERM=linux Jan 20 13:55:51.163212 kernel: hv_vmbus: Vmbus version:5.3 Jan 20 13:55:51.163217 kernel: SCSI subsystem initialized Jan 20 13:55:51.163223 kernel: hv_vmbus: registering driver hid_hyperv Jan 20 13:55:51.163228 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 20 13:55:51.163313 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 20 13:55:51.163320 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 20 13:55:51.163327 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 20 13:55:51.163332 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 20 13:55:51.163337 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 20 13:55:51.163342 kernel: PTP clock support registered Jan 20 13:55:51.163348 kernel: hv_utils: Registering HyperV Utility Driver Jan 20 13:55:51.163353 kernel: hv_vmbus: registering driver hv_utils Jan 20 13:55:51.163358 kernel: hv_utils: Heartbeat IC version 3.0 Jan 20 13:55:51.163364 kernel: hv_utils: Shutdown IC version 3.2 Jan 20 13:55:51.163369 kernel: hv_utils: TimeSync IC version 4.0 Jan 20 13:55:51.163374 kernel: hv_vmbus: registering driver hv_storvsc Jan 20 13:55:51.163475 kernel: scsi host0: storvsc_host_t Jan 20 13:55:51.163555 kernel: scsi host1: storvsc_host_t Jan 20 13:55:51.163641 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 20 13:55:51.163724 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 20 13:55:51.163798 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 20 13:55:51.163872 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Jan 20 13:55:51.163945 kernel: sd 1:0:0:0: [sda] Write Protect is off Jan 20 13:55:51.164018 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 20 13:55:51.164091 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 20 13:55:51.164171 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#61 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 20 13:55:51.164239 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#4 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 20 13:55:51.164246 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 20 13:55:51.164319 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Jan 20 13:55:51.164436 kernel: sr 1:0:0:2: [sr0] scsi-1 drive Jan 20 13:55:51.164447 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 20 13:55:51.164523 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0 Jan 20 13:55:51.164530 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 20 13:55:51.164536 kernel: device-mapper: uevent: version 1.0.3 Jan 20 13:55:51.164541 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 20 13:55:51.164546 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 20 13:55:51.164552 kernel: raid6: neonx8 gen() 18530 MB/s Jan 20 13:55:51.164558 kernel: raid6: neonx4 gen() 18571 MB/s Jan 20 13:55:51.164563 kernel: raid6: neonx2 gen() 17093 MB/s Jan 20 13:55:51.164568 kernel: raid6: neonx1 gen() 15020 MB/s Jan 20 13:55:51.164574 kernel: raid6: int64x8 gen() 10535 MB/s Jan 20 13:55:51.164579 kernel: raid6: int64x4 gen() 10609 MB/s Jan 20 13:55:51.164584 kernel: raid6: int64x2 gen() 8983 MB/s Jan 20 13:55:51.164589 kernel: raid6: int64x1 gen() 7012 MB/s Jan 20 13:55:51.164594 kernel: raid6: using algorithm neonx4 gen() 18571 MB/s Jan 20 13:55:51.164600 kernel: raid6: .... xor() 15130 MB/s, rmw enabled Jan 20 13:55:51.164606 kernel: raid6: using neon recovery algorithm Jan 20 13:55:51.164611 kernel: xor: measuring software checksum speed Jan 20 13:55:51.164616 kernel: 8regs : 26472 MB/sec Jan 20 13:55:51.164621 kernel: 32regs : 28800 MB/sec Jan 20 13:55:51.164626 kernel: arm64_neon : 36738 MB/sec Jan 20 13:55:51.164632 kernel: xor: using function: arm64_neon (36738 MB/sec) Jan 20 13:55:51.164638 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 20 13:55:51.164643 kernel: BTRFS: device fsid 8ecb0ee6-c293-489b-aa85-da46c9fc553d devid 1 transid 34 /dev/mapper/usr (254:0) scanned by mount (437) Jan 20 13:55:51.164649 kernel: BTRFS info (device dm-0): first mount of filesystem 8ecb0ee6-c293-489b-aa85-da46c9fc553d Jan 20 13:55:51.164654 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 20 13:55:51.164659 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 13:55:51.164665 kernel: BTRFS info (device dm-0): enabling free space tree Jan 20 13:55:51.164670 kernel: loop: module loaded Jan 20 13:55:51.164676 kernel: loop0: detected capacity change from 0 to 91832 Jan 20 13:55:51.164682 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 13:55:51.164688 systemd[1]: Successfully made /usr/ read-only. Jan 20 13:55:51.164695 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 13:55:51.164701 systemd[1]: Detected virtualization microsoft. Jan 20 13:55:51.164707 systemd[1]: Detected architecture arm64. Jan 20 13:55:51.164713 systemd[1]: Running in initrd. Jan 20 13:55:51.164719 systemd[1]: No hostname configured, using default hostname. Jan 20 13:55:51.164725 systemd[1]: Hostname set to . Jan 20 13:55:51.164730 systemd[1]: Initializing machine ID from random generator. Jan 20 13:55:51.164736 systemd[1]: Queued start job for default target initrd.target. Jan 20 13:55:51.164741 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 13:55:51.164748 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 13:55:51.164753 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 20 13:55:51.164760 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 20 13:55:51.164765 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 13:55:51.164772 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 13:55:51.164778 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 20 13:55:51.164784 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 13:55:51.164790 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 13:55:51.164795 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 20 13:55:51.164801 systemd[1]: Reached target paths.target - Path Units. Jan 20 13:55:51.164807 systemd[1]: Reached target slices.target - Slice Units. Jan 20 13:55:51.164813 systemd[1]: Reached target swap.target - Swaps. Jan 20 13:55:51.164818 systemd[1]: Reached target timers.target - Timer Units. Jan 20 13:55:51.164825 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 13:55:51.164831 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 13:55:51.164836 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 13:55:51.164842 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 13:55:51.164848 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 20 13:55:51.164854 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 13:55:51.164864 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 13:55:51.164871 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 13:55:51.164877 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 13:55:51.164883 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 13:55:51.164889 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 13:55:51.164895 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 13:55:51.164901 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 13:55:51.164907 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 20 13:55:51.164913 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 13:55:51.164919 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 13:55:51.164925 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 13:55:51.164932 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 13:55:51.164949 systemd-journald[576]: Collecting audit messages is enabled. Jan 20 13:55:51.164965 systemd-journald[576]: Journal started Jan 20 13:55:51.164979 systemd-journald[576]: Runtime Journal (/run/log/journal/226cb43cb55f4f0a92c59179b2c7a303) is 8M, max 78.3M, 70.3M free. Jan 20 13:55:51.183880 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 20 13:55:51.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:51.186868 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 13:55:51.209408 kernel: audit: type=1130 audit(1768917351.182:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:51.209435 kernel: audit: type=1130 audit(1768917351.202:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:51.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:51.204362 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 13:55:51.238898 kernel: audit: type=1130 audit(1768917351.223:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:51.238918 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 20 13:55:51.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:51.224801 systemd[1]: Finished systemd-fsck-usr.service. Jan 20 13:55:51.263976 kernel: audit: type=1130 audit(1768917351.237:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:51.263997 kernel: Bridge firewalling registered Jan 20 13:55:51.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:51.258216 systemd-modules-load[579]: Inserted module 'br_netfilter' Jan 20 13:55:51.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:51.263426 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 13:55:51.292249 kernel: audit: type=1130 audit(1768917351.267:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:51.284174 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 13:55:51.302617 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 13:55:51.308922 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 13:55:51.330890 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 20 13:55:51.360405 kernel: audit: type=1130 audit(1768917351.334:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:51.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:51.337193 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 13:55:51.358853 systemd-tmpfiles[591]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 20 13:55:51.372929 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 13:55:51.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:51.392438 kernel: audit: type=1130 audit(1768917351.376:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:51.394565 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 13:55:51.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:51.420684 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 13:55:51.442799 kernel: audit: type=1130 audit(1768917351.399:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:51.442820 kernel: audit: type=1130 audit(1768917351.424:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:51.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:51.442722 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 13:55:51.440000 audit: BPF prog-id=6 op=LOAD Jan 20 13:55:51.456180 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 13:55:51.463412 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 13:55:51.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:51.486504 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 13:55:51.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:51.504815 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 20 13:55:51.578402 systemd-resolved[608]: Positive Trust Anchors: Jan 20 13:55:51.578415 systemd-resolved[608]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 13:55:51.578417 systemd-resolved[608]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 13:55:51.578436 systemd-resolved[608]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 13:55:51.645891 dracut-cmdline[617]: dracut-109 Jan 20 13:55:51.649625 dracut-cmdline[617]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=46cab7ab51efe59ae17cc7b7aabd508bb3cc66ebfc63b9c99ab53383296c1454 Jan 20 13:55:51.650280 systemd-resolved[608]: Defaulting to hostname 'linux'. Jan 20 13:55:51.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:51.651397 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 13:55:51.678952 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 13:55:51.789413 kernel: Loading iSCSI transport class v2.0-870. Jan 20 13:55:51.832413 kernel: iscsi: registered transport (tcp) Jan 20 13:55:51.864761 kernel: iscsi: registered transport (qla4xxx) Jan 20 13:55:51.864779 kernel: QLogic iSCSI HBA Driver Jan 20 13:55:51.914158 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 13:55:51.937488 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 13:55:51.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:51.947188 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 13:55:51.994927 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 13:55:52.007853 kernel: kauditd_printk_skb: 5 callbacks suppressed Jan 20 13:55:52.007873 kernel: audit: type=1130 audit(1768917351.998:16): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:51.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:52.001491 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jan 20 13:55:52.029474 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 13:55:52.060640 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 13:55:52.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:52.082000 audit: BPF prog-id=7 op=LOAD Jan 20 13:55:52.085892 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 13:55:52.104775 kernel: audit: type=1130 audit(1768917352.064:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:52.104797 kernel: audit: type=1334 audit(1768917352.082:18): prog-id=7 op=LOAD Jan 20 13:55:52.104805 kernel: audit: type=1334 audit(1768917352.082:19): prog-id=8 op=LOAD Jan 20 13:55:52.082000 audit: BPF prog-id=8 op=LOAD Jan 20 13:55:52.158428 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 13:55:52.185081 kernel: audit: type=1130 audit(1768917352.164:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:52.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:52.202969 systemd-udevd[849]: Using default interface naming scheme 'v257'. Jan 20 13:55:52.212998 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 13:55:52.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:52.238236 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 20 13:55:52.248194 kernel: audit: type=1130 audit(1768917352.218:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:52.254365 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 13:55:52.267520 kernel: audit: type=1334 audit(1768917352.252:22): prog-id=9 op=LOAD Jan 20 13:55:52.252000 audit: BPF prog-id=9 op=LOAD Jan 20 13:55:52.273042 dracut-pre-trigger[950]: rd.md=0: removing MD RAID activation Jan 20 13:55:52.299616 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 13:55:52.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:52.308700 systemd-networkd[951]: lo: Link UP Jan 20 13:55:52.344261 kernel: audit: type=1130 audit(1768917352.311:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 13:55:52.344286 kernel: audit: type=1130 audit(1768917352.328:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:52.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:52.308703 systemd-networkd[951]: lo: Gained carrier Jan 20 13:55:52.312146 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 13:55:52.330544 systemd[1]: Reached target network.target - Network. Jan 20 13:55:52.351908 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 13:55:52.406467 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 13:55:52.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:52.435217 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 13:55:52.450293 kernel: audit: type=1130 audit(1768917352.413:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:52.516410 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#52 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 20 13:55:52.523501 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 13:55:52.535901 kernel: hv_vmbus: registering driver hv_netvsc Jan 20 13:55:52.528877 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 13:55:52.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:52.538584 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 13:55:52.544798 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 13:55:52.584857 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 13:55:52.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:52.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:52.584989 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 13:55:52.607399 kernel: hv_netvsc 000d3af7-ab79-000d-3af7-ab79000d3af7 eth0: VF slot 1 added Jan 20 13:55:52.611516 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 13:55:52.625856 systemd-networkd[951]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 13:55:52.625869 systemd-networkd[951]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 20 13:55:52.627653 systemd-networkd[951]: eth0: Link UP Jan 20 13:55:52.627721 systemd-networkd[951]: eth0: Gained carrier Jan 20 13:55:52.654551 kernel: hv_vmbus: registering driver hv_pci Jan 20 13:55:52.654576 kernel: hv_pci 8f366e96-13b5-4cb2-9f67-729b81274afd: PCI VMBus probing: Using version 0x10004 Jan 20 13:55:52.627730 systemd-networkd[951]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 13:55:52.670364 kernel: hv_pci 8f366e96-13b5-4cb2-9f67-729b81274afd: PCI host bridge to bus 13b5:00 Jan 20 13:55:52.670684 kernel: pci_bus 13b5:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 20 13:55:52.680028 kernel: pci_bus 13b5:00: No busn resource found for root bus, will use [bus 00-ff] Jan 20 13:55:52.686774 kernel: pci 13b5:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Jan 20 13:55:52.689429 systemd-networkd[951]: eth0: DHCPv4 address 10.200.20.32/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 20 13:55:52.706897 kernel: pci 13b5:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 20 13:55:52.702021 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 13:55:52.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:52.715404 kernel: pci 13b5:00:02.0: enabling Extended Tags Jan 20 13:55:52.731500 kernel: pci 13b5:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 13b5:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Jan 20 13:55:52.740819 kernel: pci_bus 13b5:00: busn_res: [bus 00-ff] end is updated to 00 Jan 20 13:55:52.740991 kernel: pci 13b5:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Jan 20 13:55:52.947366 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 20 13:55:52.961511 kernel: mlx5_core 13b5:00:02.0: enabling device (0000 -> 0002) Jan 20 13:55:52.961693 kernel: mlx5_core 13b5:00:02.0: PTM is not supported by PCIe Jan 20 13:55:52.965691 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 13:55:52.973908 kernel: mlx5_core 13b5:00:02.0: firmware version: 16.30.5026 Jan 20 13:55:53.068333 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 20 13:55:53.084904 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 20 13:55:53.111505 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 20 13:55:53.188957 kernel: hv_netvsc 000d3af7-ab79-000d-3af7-ab79000d3af7 eth0: VF registering: eth1 Jan 20 13:55:53.189146 kernel: mlx5_core 13b5:00:02.0 eth1: joined to eth0 Jan 20 13:55:53.196374 kernel: mlx5_core 13b5:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 20 13:55:53.207992 systemd-networkd[951]: eth1: Interface name change detected, renamed to enP5045s1. Jan 20 13:55:53.214636 kernel: mlx5_core 13b5:00:02.0 enP5045s1: renamed from eth1 Jan 20 13:55:53.293457 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 20 13:55:53.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 13:55:53.299633 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 13:55:53.309019 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 13:55:53.319713 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 13:55:53.329765 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 13:55:53.354268 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 20 13:55:53.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:53.394402 kernel: mlx5_core 13b5:00:02.0 enP5045s1: Link up Jan 20 13:55:53.426007 systemd-networkd[951]: enP5045s1: Link UP Jan 20 13:55:53.429446 kernel: hv_netvsc 000d3af7-ab79-000d-3af7-ab79000d3af7 eth0: Data path switched to VF: enP5045s1 Jan 20 13:55:53.627579 systemd-networkd[951]: enP5045s1: Gained carrier Jan 20 13:55:53.988131 systemd-networkd[951]: eth0: Gained IPv6LL Jan 20 13:55:54.206234 disk-uuid[1065]: Warning: The kernel is still using the old partition table. Jan 20 13:55:54.206234 disk-uuid[1065]: The new table will be used at the next reboot or after you Jan 20 13:55:54.206234 disk-uuid[1065]: run partprobe(8) or kpartx(8) Jan 20 13:55:54.206234 disk-uuid[1065]: The operation has completed successfully. Jan 20 13:55:54.223175 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 13:55:54.223515 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 13:55:54.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:54.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:54.232689 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 20 13:55:54.287338 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1226) Jan 20 13:55:54.287400 kernel: BTRFS info (device sda6): first mount of filesystem da228fb2-1a6d-4b16-9d50-3527607bcd2e Jan 20 13:55:54.292061 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 20 13:55:54.367171 kernel: BTRFS info (device sda6): turning on async discard Jan 20 13:55:54.367234 kernel: BTRFS info (device sda6): enabling free space tree Jan 20 13:55:54.377552 kernel: BTRFS info (device sda6): last unmount of filesystem da228fb2-1a6d-4b16-9d50-3527607bcd2e Jan 20 13:55:54.377794 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 13:55:54.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:54.382997 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 20 13:55:55.511816 ignition[1245]: Ignition 2.24.0 Jan 20 13:55:55.511829 ignition[1245]: Stage: fetch-offline Jan 20 13:55:55.515229 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 20 13:55:55.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:55.513202 ignition[1245]: no configs at "/usr/lib/ignition/base.d" Jan 20 13:55:55.523066 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 20 13:55:55.513224 ignition[1245]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 13:55:55.513320 ignition[1245]: parsed url from cmdline: "" Jan 20 13:55:55.513323 ignition[1245]: no config URL provided Jan 20 13:55:55.513381 ignition[1245]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 13:55:55.513399 ignition[1245]: no config at "/usr/lib/ignition/user.ign" Jan 20 13:55:55.513402 ignition[1245]: failed to fetch config: resource requires networking Jan 20 13:55:55.513585 ignition[1245]: Ignition finished successfully Jan 20 13:55:55.554025 ignition[1252]: Ignition 2.24.0 Jan 20 13:55:55.554030 ignition[1252]: Stage: fetch Jan 20 13:55:55.554400 ignition[1252]: no configs at "/usr/lib/ignition/base.d" Jan 20 13:55:55.554409 ignition[1252]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 13:55:55.554497 ignition[1252]: parsed url from cmdline: "" Jan 20 13:55:55.554499 ignition[1252]: no config URL provided Jan 20 13:55:55.554502 ignition[1252]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 13:55:55.554507 ignition[1252]: no config at "/usr/lib/ignition/user.ign" Jan 20 13:55:55.554523 ignition[1252]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 20 13:55:55.686647 ignition[1252]: GET result: OK Jan 20 13:55:55.686710 ignition[1252]: config has been read from IMDS userdata Jan 20 13:55:55.686723 ignition[1252]: parsing config with SHA512: 5d60b391291fc5badb2a8853f1976e321807247373080ff6996f616d639b67f20a126e20344d8808f41b219868f1f5f3aa32c56c4bb72919a34d173e84b205ec Jan 20 13:55:55.691784 unknown[1252]: fetched base config from "system" Jan 20 13:55:55.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:55.692073 ignition[1252]: fetch: fetch complete Jan 20 13:55:55.691790 unknown[1252]: fetched base config from "system" Jan 20 13:55:55.692077 ignition[1252]: fetch: fetch passed Jan 20 13:55:55.691794 unknown[1252]: fetched user config from "azure" Jan 20 13:55:55.692120 ignition[1252]: Ignition finished successfully Jan 20 13:55:55.693970 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 20 13:55:55.702642 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 20 13:55:55.737809 ignition[1259]: Ignition 2.24.0 Jan 20 13:55:55.737826 ignition[1259]: Stage: kargs Jan 20 13:55:55.738015 ignition[1259]: no configs at "/usr/lib/ignition/base.d" Jan 20 13:55:55.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:55.743704 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 20 13:55:55.738022 ignition[1259]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 13:55:55.750605 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 20 13:55:55.738642 ignition[1259]: kargs: kargs passed Jan 20 13:55:55.738686 ignition[1259]: Ignition finished successfully Jan 20 13:55:55.779475 ignition[1265]: Ignition 2.24.0 Jan 20 13:55:55.779483 ignition[1265]: Stage: disks Jan 20 13:55:55.783606 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 20 13:55:55.779690 ignition[1265]: no configs at "/usr/lib/ignition/base.d" Jan 20 13:55:55.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:55.793543 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 13:55:55.779705 ignition[1265]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 13:55:55.801790 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 13:55:55.780400 ignition[1265]: disks: disks passed Jan 20 13:55:55.810970 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 13:55:55.780448 ignition[1265]: Ignition finished successfully Jan 20 13:55:55.819193 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 13:55:55.828270 systemd[1]: Reached target basic.target - Basic System. Jan 20 13:55:55.836558 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 13:55:55.920673 systemd-fsck[1273]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks Jan 20 13:55:55.930033 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 13:55:55.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:55.936400 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 13:55:56.200418 kernel: EXT4-fs (sda9): mounted filesystem d96b7832-e07c-4b58-bdbc-7171b491c7db r/w with ordered data mode. Quota mode: none. Jan 20 13:55:56.200597 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 13:55:56.207755 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 13:55:56.243729 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 13:55:56.249044 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 13:55:56.264516 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 20 13:55:56.275264 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 13:55:56.275309 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 13:55:56.290588 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 13:55:56.301470 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 20 13:55:56.321406 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1287) Jan 20 13:55:56.330691 kernel: BTRFS info (device sda6): first mount of filesystem da228fb2-1a6d-4b16-9d50-3527607bcd2e Jan 20 13:55:56.330738 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 20 13:55:56.340724 kernel: BTRFS info (device sda6): turning on async discard Jan 20 13:55:56.340760 kernel: BTRFS info (device sda6): enabling free space tree Jan 20 13:55:56.342624 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 13:55:56.891820 coreos-metadata[1289]: Jan 20 13:55:56.891 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 20 13:55:56.899928 coreos-metadata[1289]: Jan 20 13:55:56.899 INFO Fetch successful Jan 20 13:55:56.899928 coreos-metadata[1289]: Jan 20 13:55:56.899 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 20 13:55:56.913331 coreos-metadata[1289]: Jan 20 13:55:56.913 INFO Fetch successful Jan 20 13:55:56.927248 coreos-metadata[1289]: Jan 20 13:55:56.927 INFO wrote hostname ci-9999.1.1-f-6b32856eb5 to /sysroot/etc/hostname Jan 20 13:55:56.935606 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 20 13:55:56.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:58.328492 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 13:55:58.340919 kernel: kauditd_printk_skb: 15 callbacks suppressed Jan 20 13:55:58.340942 kernel: audit: type=1130 audit(1768917358.332:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:58.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:58.342363 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 13:55:58.361615 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 13:55:58.401150 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 13:55:58.411581 kernel: BTRFS info (device sda6): last unmount of filesystem da228fb2-1a6d-4b16-9d50-3527607bcd2e Jan 20 13:55:58.422923 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 20 13:55:58.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:58.444445 kernel: audit: type=1130 audit(1768917358.430:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 13:55:58.447145 ignition[1393]: INFO : Ignition 2.24.0 Jan 20 13:55:58.447145 ignition[1393]: INFO : Stage: mount Jan 20 13:55:58.459082 ignition[1393]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 13:55:58.459082 ignition[1393]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 13:55:58.459082 ignition[1393]: INFO : mount: mount passed Jan 20 13:55:58.459082 ignition[1393]: INFO : Ignition finished successfully Jan 20 13:55:58.498095 kernel: audit: type=1130 audit(1768917358.462:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:58.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:55:58.454633 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 13:55:58.464817 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 13:55:58.494601 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 13:55:58.521402 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1402) Jan 20 13:55:58.533242 kernel: BTRFS info (device sda6): first mount of filesystem da228fb2-1a6d-4b16-9d50-3527607bcd2e Jan 20 13:55:58.533292 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 20 13:55:58.543531 kernel: BTRFS info (device sda6): turning on async discard Jan 20 13:55:58.543570 kernel: BTRFS info (device sda6): enabling free space tree Jan 20 13:55:58.545310 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 13:55:58.573626 ignition[1419]: INFO : Ignition 2.24.0 Jan 20 13:55:58.573626 ignition[1419]: INFO : Stage: files Jan 20 13:55:58.580128 ignition[1419]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 13:55:58.580128 ignition[1419]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 13:55:58.580128 ignition[1419]: DEBUG : files: compiled without relabeling support, skipping Jan 20 13:55:58.580128 ignition[1419]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 13:55:58.580128 ignition[1419]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 13:55:58.639484 ignition[1419]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 13:55:58.644911 ignition[1419]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 13:55:58.644911 ignition[1419]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 13:55:58.639885 unknown[1419]: wrote ssh authorized keys file for user: core Jan 20 13:55:58.703605 ignition[1419]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 20 13:55:58.711553 ignition[1419]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 20 13:55:58.747766 ignition[1419]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 13:55:58.908463 ignition[1419]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 20 13:55:58.908463 ignition[1419]: INFO : files: createFilesystemsFiles: createFiles: op(4): 
[started] writing file "/sysroot/home/core/install.sh" Jan 20 13:55:58.908463 ignition[1419]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 20 13:55:58.908463 ignition[1419]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 13:55:58.938470 ignition[1419]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 13:55:58.938470 ignition[1419]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 13:55:58.938470 ignition[1419]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 13:55:58.938470 ignition[1419]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 13:55:58.938470 ignition[1419]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 13:55:58.974322 ignition[1419]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 13:55:58.974322 ignition[1419]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 13:55:58.974322 ignition[1419]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 20 13:55:58.974322 ignition[1419]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 20 13:55:58.974322 ignition[1419]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 20 13:55:58.974322 ignition[1419]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 20 13:55:59.618132 ignition[1419]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 20 13:56:00.767046 ignition[1419]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 20 13:56:00.767046 ignition[1419]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 20 13:56:00.798565 ignition[1419]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 13:56:00.816528 ignition[1419]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 13:56:00.816528 ignition[1419]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 20 13:56:00.816528 ignition[1419]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 20 13:56:00.816528 ignition[1419]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 20 13:56:00.816528 ignition[1419]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 13:56:00.816528 ignition[1419]: INFO : 
files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 13:56:00.816528 ignition[1419]: INFO : files: files passed Jan 20 13:56:00.816528 ignition[1419]: INFO : Ignition finished successfully Jan 20 13:56:00.896165 kernel: audit: type=1130 audit(1768917360.829:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:00.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:00.825734 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 13:56:00.848827 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 13:56:00.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:00.881117 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 13:56:00.939029 kernel: audit: type=1130 audit(1768917360.905:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:00.939056 kernel: audit: type=1131 audit(1768917360.905:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:00.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:00.889736 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 13:56:00.896724 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 13:56:00.952739 initrd-setup-root-after-ignition[1450]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 13:56:00.952739 initrd-setup-root-after-ignition[1450]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 13:56:00.966617 initrd-setup-root-after-ignition[1454]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 13:56:00.967605 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 13:56:00.999851 kernel: audit: type=1130 audit(1768917360.977:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:00.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:00.978559 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 20 13:56:01.005336 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 13:56:01.050378 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 13:56:01.050525 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Jan 20 13:56:01.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.060465 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 20 13:56:01.099461 kernel: audit: type=1130 audit(1768917361.058:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.099487 kernel: audit: type=1131 audit(1768917361.058:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.081469 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 13:56:01.100420 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 13:56:01.101300 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 13:56:01.138482 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 13:56:01.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.160198 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 13:56:01.170706 kernel: audit: type=1130 audit(1768917361.142:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.185314 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 13:56:01.185523 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 13:56:01.194911 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 13:56:01.204140 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 13:56:01.212348 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 13:56:01.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.212489 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 13:56:01.225146 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 13:56:01.230809 systemd[1]: Stopped target basic.target - Basic System. Jan 20 13:56:01.239538 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 13:56:01.248108 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 13:56:01.256388 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 13:56:01.265980 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 20 13:56:01.282044 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Jan 20 13:56:01.286760 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 13:56:01.296851 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 13:56:01.301196 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 13:56:01.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.310270 systemd[1]: Stopped target swap.target - Swaps. Jan 20 13:56:01.319029 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 13:56:01.319211 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 13:56:01.331327 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 13:56:01.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.336039 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 13:56:01.344625 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 13:56:01.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.344721 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 13:56:01.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.353777 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 13:56:01.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.353940 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 13:56:01.367692 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 13:56:01.367875 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 13:56:01.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.378548 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 13:56:01.378671 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 13:56:01.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 13:56:01.457575 ignition[1474]: INFO : Ignition 2.24.0 Jan 20 13:56:01.457575 ignition[1474]: INFO : Stage: umount Jan 20 13:56:01.457575 ignition[1474]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 13:56:01.457575 ignition[1474]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 13:56:01.457575 ignition[1474]: INFO : umount: umount passed Jan 20 13:56:01.457575 ignition[1474]: INFO : Ignition finished successfully Jan 20 13:56:01.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.386480 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 20 13:56:01.386617 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 20 13:56:01.400492 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 20 13:56:01.411646 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 13:56:01.419779 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 13:56:01.420062 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 13:56:01.430342 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 13:56:01.430504 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 13:56:01.453095 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 13:56:01.453219 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 13:56:01.462652 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 13:56:01.464562 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 13:56:01.591977 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 20 13:56:01.592103 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 13:56:01.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.602750 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 13:56:01.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.602849 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 13:56:01.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 13:56:01.608162 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 13:56:01.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.608218 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 13:56:01.617090 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 20 13:56:01.617151 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 20 13:56:01.625167 systemd[1]: Stopped target network.target - Network. Jan 20 13:56:01.629041 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 13:56:01.629108 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 13:56:01.637922 systemd[1]: Stopped target paths.target - Path Units. Jan 20 13:56:01.645845 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 13:56:01.653655 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 13:56:01.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.658847 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 13:56:01.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.662537 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 13:56:01.670468 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 13:56:01.670521 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 13:56:01.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.678652 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 13:56:01.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.678704 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 13:56:01.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.692142 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 20 13:56:01.692175 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 20 13:56:01.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.699867 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 13:56:01.776000 audit: BPF prog-id=9 op=UNLOAD Jan 20 13:56:01.780000 audit: BPF prog-id=6 op=UNLOAD Jan 20 13:56:01.699933 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 13:56:01.708276 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Jan 20 13:56:01.708318 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 13:56:01.718921 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 13:56:01.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.725868 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 13:56:01.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.736019 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 13:56:01.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.736578 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 13:56:01.736677 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 13:56:01.744462 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 13:56:01.744577 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 13:56:01.752720 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 13:56:01.752839 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 13:56:01.765830 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 13:56:01.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.765947 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 13:56:01.779160 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 20 13:56:01.787108 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 13:56:01.918502 kernel: hv_netvsc 000d3af7-ab79-000d-3af7-ab79000d3af7 eth0: Data path switched from VF: enP5045s1 Jan 20 13:56:01.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.787155 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 13:56:01.796701 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 13:56:01.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.806341 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 13:56:01.806432 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 13:56:01.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.816209 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 13:56:01.816268 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jan 20 13:56:01.824267 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 13:56:01.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.824304 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 13:56:01.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.838405 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 13:56:01.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.874404 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 13:56:02.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.874536 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 13:56:02.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.884474 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 13:56:02.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:02.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:01.884520 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 13:56:01.892954 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 13:56:01.892987 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 13:56:01.901271 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 13:56:01.901318 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 13:56:01.919600 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 13:56:01.919657 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 13:56:01.931914 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 13:56:01.931998 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 13:56:01.947443 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 13:56:01.960001 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 20 13:56:01.960083 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
Jan 20 13:56:01.965258 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 13:56:01.965311 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 13:56:01.974881 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 20 13:56:01.974932 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 13:56:01.980165 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 13:56:01.980200 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 13:56:01.992998 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 13:56:01.993039 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 13:56:02.003886 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 13:56:02.137601 systemd-journald[576]: Received SIGTERM from PID 1 (systemd). Jan 20 13:56:02.006036 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 13:56:02.010914 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 13:56:02.010995 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 13:56:02.023293 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 13:56:02.032250 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 13:56:02.056079 systemd[1]: Switching root. Jan 20 13:56:02.161896 systemd-journald[576]: Journal stopped Jan 20 13:56:07.846343 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 13:56:07.846365 kernel: SELinux: policy capability open_perms=1 Jan 20 13:56:07.846374 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 13:56:07.846380 kernel: SELinux: policy capability always_check_network=0 Jan 20 13:56:07.846403 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 13:56:07.846409 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 13:56:07.846416 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 13:56:07.846422 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 13:56:07.846428 kernel: SELinux: policy capability userspace_initial_context=0 Jan 20 13:56:07.846436 systemd[1]: Successfully loaded SELinux policy in 149.911ms. Jan 20 13:56:07.846444 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.652ms. Jan 20 13:56:07.846452 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 13:56:07.846459 systemd[1]: Detected virtualization microsoft. Jan 20 13:56:07.846465 systemd[1]: Detected architecture arm64. Jan 20 13:56:07.846473 systemd[1]: Detected first boot. Jan 20 13:56:07.846480 systemd[1]: Hostname set to . Jan 20 13:56:07.846486 systemd[1]: Initializing machine ID from random generator. 
Jan 20 13:56:07.846493 kernel: kauditd_printk_skb: 41 callbacks suppressed Jan 20 13:56:07.846499 kernel: audit: type=1334 audit(1768917363.770:92): prog-id=10 op=LOAD Jan 20 13:56:07.846507 kernel: audit: type=1334 audit(1768917363.770:93): prog-id=10 op=UNLOAD Jan 20 13:56:07.846512 kernel: audit: type=1334 audit(1768917363.773:94): prog-id=11 op=LOAD Jan 20 13:56:07.846519 kernel: audit: type=1334 audit(1768917363.773:95): prog-id=11 op=UNLOAD Jan 20 13:56:07.846525 zram_generator::config[1516]: No configuration found. Jan 20 13:56:07.846532 kernel: NET: Registered PF_VSOCK protocol family Jan 20 13:56:07.846539 systemd[1]: Populated /etc with preset unit settings. Jan 20 13:56:07.846546 kernel: audit: type=1334 audit(1768917366.951:96): prog-id=12 op=LOAD Jan 20 13:56:07.846552 kernel: audit: type=1334 audit(1768917366.951:97): prog-id=3 op=UNLOAD Jan 20 13:56:07.846558 kernel: audit: type=1334 audit(1768917366.956:98): prog-id=13 op=LOAD Jan 20 13:56:07.846564 kernel: audit: type=1334 audit(1768917366.959:99): prog-id=14 op=LOAD Jan 20 13:56:07.846572 kernel: audit: type=1334 audit(1768917366.959:100): prog-id=4 op=UNLOAD Jan 20 13:56:07.846578 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 13:56:07.846585 kernel: audit: type=1334 audit(1768917366.959:101): prog-id=5 op=UNLOAD Jan 20 13:56:07.846591 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 13:56:07.846598 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 13:56:07.846606 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 13:56:07.846612 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 13:56:07.846619 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 13:56:07.846626 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 13:56:07.846633 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 13:56:07.846640 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 13:56:07.846648 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 13:56:07.846655 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 13:56:07.846662 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 13:56:07.846668 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 13:56:07.846676 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 13:56:07.846683 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 13:56:07.846689 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 13:56:07.846696 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 13:56:07.846702 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 20 13:56:07.846709 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 13:56:07.846716 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 13:56:07.846724 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Jan 20 13:56:07.846731 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 13:56:07.846737 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 13:56:07.846744 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 13:56:07.846751 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 13:56:07.846757 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 13:56:07.846765 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 20 13:56:07.846771 systemd[1]: Reached target slices.target - Slice Units. Jan 20 13:56:07.846778 systemd[1]: Reached target swap.target - Swaps. Jan 20 13:56:07.846784 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 13:56:07.846791 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 13:56:07.846799 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 20 13:56:07.846806 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 13:56:07.846812 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 20 13:56:07.846819 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 13:56:07.846826 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 20 13:56:07.846833 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 20 13:56:07.846840 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 13:56:07.846847 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 13:56:07.846853 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 13:56:07.846862 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 13:56:07.846869 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 13:56:07.846875 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 13:56:07.846883 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 13:56:07.846890 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 13:56:07.846896 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 13:56:07.846903 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 13:56:07.846910 systemd[1]: Reached target machines.target - Containers. Jan 20 13:56:07.846917 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 13:56:07.846925 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 13:56:07.846932 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 13:56:07.846938 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 13:56:07.846945 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 13:56:07.846952 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 13:56:07.846959 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 20 13:56:07.846965 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 13:56:07.846973 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 13:56:07.846980 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 13:56:07.846987 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 13:56:07.846993 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 13:56:07.847000 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 13:56:07.847008 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 13:56:07.847015 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 13:56:07.847023 kernel: fuse: init (API version 7.41) Jan 20 13:56:07.847029 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 13:56:07.847037 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 13:56:07.847043 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 13:56:07.847050 kernel: ACPI: bus type drm_connector registered Jan 20 13:56:07.847056 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 13:56:07.847081 systemd-journald[1598]: Collecting audit messages is enabled. Jan 20 13:56:07.847096 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 20 13:56:07.847104 systemd-journald[1598]: Journal started Jan 20 13:56:07.847121 systemd-journald[1598]: Runtime Journal (/run/log/journal/5dab4c8dffd0493d93f57c18aa9d7d41) is 8M, max 78.3M, 70.3M free. Jan 20 13:56:07.299000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 20 13:56:07.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:07.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 13:56:07.750000 audit: BPF prog-id=14 op=UNLOAD Jan 20 13:56:07.750000 audit: BPF prog-id=13 op=UNLOAD Jan 20 13:56:07.754000 audit: BPF prog-id=15 op=LOAD Jan 20 13:56:07.754000 audit: BPF prog-id=16 op=LOAD Jan 20 13:56:07.754000 audit: BPF prog-id=17 op=LOAD Jan 20 13:56:07.841000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 20 13:56:07.841000 audit[1598]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffe711be10 a2=4000 a3=0 items=0 ppid=1 pid=1598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:07.841000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 20 13:56:06.933984 systemd[1]: Queued start job for default target multi-user.target. Jan 20 13:56:06.961566 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 20 13:56:06.962053 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 13:56:06.963590 systemd[1]: systemd-journald.service: Consumed 2.539s CPU time. Jan 20 13:56:07.876724 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 13:56:07.891680 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 13:56:07.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:07.892957 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 13:56:07.897451 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 13:56:07.905162 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 13:56:07.909996 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 13:56:07.914803 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 13:56:07.919886 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 13:56:07.924433 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 13:56:07.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:07.930337 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 13:56:07.932461 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 13:56:07.938253 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 13:56:07.938414 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 13:56:07.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:07.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 13:56:07.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:07.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:07.944909 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 13:56:07.945053 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 13:56:07.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:07.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:07.949763 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 13:56:07.949903 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 13:56:07.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:07.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:07.955323 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 13:56:07.955487 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 13:56:07.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:07.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:07.960137 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 13:56:07.960282 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 13:56:07.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:07.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:07.966140 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 13:56:07.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 13:56:07.971332 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 13:56:07.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:07.977234 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 13:56:07.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:07.989308 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 20 13:56:07.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:08.001497 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 13:56:08.007585 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 20 13:56:08.016536 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 13:56:08.033880 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 13:56:08.038800 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 13:56:08.038837 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 13:56:08.043927 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 20 13:56:08.049504 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 13:56:08.049616 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 13:56:08.074967 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 13:56:08.087105 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 13:56:08.092302 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 13:56:08.093280 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 13:56:08.098143 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 13:56:08.099192 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 13:56:08.105890 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 13:56:08.113018 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 13:56:08.120206 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 13:56:08.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 13:56:08.126319 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 13:56:08.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:08.131967 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 13:56:08.137001 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 13:56:08.142865 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 13:56:08.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:08.159279 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 13:56:08.164435 kernel: loop1: detected capacity change from 0 to 353272 Jan 20 13:56:08.167171 systemd-journald[1598]: Time spent on flushing to /var/log/journal/5dab4c8dffd0493d93f57c18aa9d7d41 is 12.667ms for 1086 entries. Jan 20 13:56:08.167171 systemd-journald[1598]: System Journal (/var/log/journal/5dab4c8dffd0493d93f57c18aa9d7d41) is 8M, max 2.2G, 2.2G free. Jan 20 13:56:08.196553 systemd-journald[1598]: Received client request to flush runtime journal. Jan 20 13:56:08.170589 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 20 13:56:08.198474 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 13:56:08.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:08.205516 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 13:56:08.212319 systemd-tmpfiles[1656]: ACLs are not supported, ignoring. Jan 20 13:56:08.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:08.212334 systemd-tmpfiles[1656]: ACLs are not supported, ignoring. Jan 20 13:56:08.220943 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 13:56:08.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:08.228581 kernel: loop1: p1 p2 p3 Jan 20 13:56:08.228811 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 13:56:08.258577 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 20 13:56:08.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:08.428289 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Jan 20 13:56:08.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:08.434000 audit: BPF prog-id=18 op=LOAD Jan 20 13:56:08.435000 audit: BPF prog-id=19 op=LOAD Jan 20 13:56:08.435000 audit: BPF prog-id=20 op=LOAD Jan 20 13:56:08.439687 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 20 13:56:08.447000 audit: BPF prog-id=21 op=LOAD Jan 20 13:56:08.451681 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 13:56:08.461257 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 13:56:08.469000 audit: BPF prog-id=22 op=LOAD Jan 20 13:56:08.470000 audit: BPF prog-id=23 op=LOAD Jan 20 13:56:08.470000 audit: BPF prog-id=24 op=LOAD Jan 20 13:56:08.474547 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 20 13:56:08.480000 audit: BPF prog-id=25 op=LOAD Jan 20 13:56:08.481000 audit: BPF prog-id=26 op=LOAD Jan 20 13:56:08.481000 audit: BPF prog-id=27 op=LOAD Jan 20 13:56:08.483524 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 13:56:08.505211 systemd-tmpfiles[1676]: ACLs are not supported, ignoring. Jan 20 13:56:08.507357 systemd-tmpfiles[1676]: ACLs are not supported, ignoring. Jan 20 13:56:08.513495 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 13:56:08.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:08.544726 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 13:56:08.547908 systemd-nsresourced[1677]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 20 13:56:08.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:08.551751 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 20 13:56:08.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:08.631314 systemd-oomd[1674]: No swap; memory pressure usage will be degraded Jan 20 13:56:08.632223 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 20 13:56:08.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:08.652765 systemd-resolved[1675]: Positive Trust Anchors: Jan 20 13:56:08.652787 systemd-resolved[1675]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 13:56:08.652791 systemd-resolved[1675]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 13:56:08.652810 systemd-resolved[1675]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 13:56:08.690823 systemd-resolved[1675]: Using system hostname 'ci-9999.1.1-f-6b32856eb5'. Jan 20 13:56:08.692283 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 13:56:08.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:08.697630 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 13:56:08.756425 kernel: erofs: (device loop1p1): mounted with root inode @ nid 39. Jan 20 13:56:08.761472 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 13:56:08.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:08.765000 audit: BPF prog-id=8 op=UNLOAD Jan 20 13:56:08.765000 audit: BPF prog-id=7 op=UNLOAD Jan 20 13:56:08.766000 audit: BPF prog-id=28 op=LOAD Jan 20 13:56:08.766000 audit: BPF prog-id=29 op=LOAD Jan 20 13:56:08.768819 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 13:56:08.799896 systemd-udevd[1698]: Using default interface naming scheme 'v257'. Jan 20 13:56:08.821416 kernel: loop2: detected capacity change from 0 to 161080 Jan 20 13:56:08.851412 kernel: loop2: p1 p2 p3 Jan 20 13:56:08.977243 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 13:56:09.136944 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 13:56:09.161094 kernel: kauditd_printk_skb: 61 callbacks suppressed Jan 20 13:56:09.161205 kernel: audit: type=1130 audit(1768917369.142:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:09.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:09.151596 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 13:56:09.146000 audit: BPF prog-id=30 op=LOAD Jan 20 13:56:09.169379 kernel: audit: type=1334 audit(1768917369.146:162): prog-id=30 op=LOAD Jan 20 13:56:09.226784 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
Jan 20 13:56:09.273419 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#26 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 20 13:56:09.297417 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 13:56:09.301421 kernel: hv_vmbus: registering driver hyperv_fb Jan 20 13:56:09.302503 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 20 13:56:09.309634 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 20 13:56:09.313355 kernel: Console: switching to colour dummy device 80x25 Jan 20 13:56:09.320414 kernel: hv_vmbus: registering driver hv_balloon Jan 20 13:56:09.320516 kernel: Console: switching to colour frame buffer device 128x48 Jan 20 13:56:09.331532 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 20 13:56:09.331635 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 20 13:56:09.343413 systemd-networkd[1707]: lo: Link UP Jan 20 13:56:09.343422 systemd-networkd[1707]: lo: Gained carrier Jan 20 13:56:09.344716 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 13:56:09.349545 systemd-networkd[1707]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 13:56:09.349556 systemd-networkd[1707]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 13:56:09.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:09.350871 systemd[1]: Reached target network.target - Network. Jan 20 13:56:09.369410 kernel: audit: type=1130 audit(1768917369.349:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:09.369538 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 20 13:56:09.379614 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 13:56:09.429409 kernel: mlx5_core 13b5:00:02.0 enP5045s1: Link up Jan 20 13:56:09.440003 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 13:56:09.462407 kernel: hv_netvsc 000d3af7-ab79-000d-3af7-ab79000d3af7 eth0: Data path switched to VF: enP5045s1 Jan 20 13:56:09.463309 systemd-networkd[1707]: enP5045s1: Link UP Jan 20 13:56:09.464092 systemd-networkd[1707]: eth0: Link UP Jan 20 13:56:09.464101 systemd-networkd[1707]: eth0: Gained carrier Jan 20 13:56:09.464122 systemd-networkd[1707]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 13:56:09.471982 systemd-networkd[1707]: enP5045s1: Gained carrier Jan 20 13:56:09.479214 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 20 13:56:09.484693 systemd-networkd[1707]: eth0: DHCPv4 address 10.200.20.32/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 20 13:56:09.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 13:56:09.509413 kernel: audit: type=1130 audit(1768917369.488:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:09.509503 kernel: MACsec IEEE 802.1AE Jan 20 13:56:09.512567 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 13:56:09.513361 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 13:56:09.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:09.531520 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 13:56:09.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:09.554940 kernel: audit: type=1130 audit(1768917369.523:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:09.555107 kernel: audit: type=1131 audit(1768917369.523:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:09.562716 kernel: erofs: (device loop2p1): mounted with root inode @ nid 39. Jan 20 13:56:09.617621 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 20 13:56:09.623700 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 20 13:56:09.673175 kernel: loop3: detected capacity change from 0 to 159496 Jan 20 13:56:09.675492 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 20 13:56:09.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:09.695431 kernel: audit: type=1130 audit(1768917369.680:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:09.699615 kernel: loop3: p1 p2 p3 Jan 20 13:56:10.075913 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 13:56:10.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:10.095464 kernel: audit: type=1130 audit(1768917370.079:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:10.130414 kernel: erofs: (device loop3p1): mounted with root inode @ nid 39. 
Jan 20 13:56:10.148700 kernel: loop4: detected capacity change from 0 to 207008 Jan 20 13:56:10.186412 kernel: loop5: detected capacity change from 0 to 353272 Jan 20 13:56:10.189409 kernel: loop5: p1 p2 p3 Jan 20 13:56:10.233487 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 20 13:56:10.233606 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Jan 20 13:56:10.238695 kernel: device-mapper: table: 254:1: verity: Unrecognized verity feature request (-EINVAL) Jan 20 13:56:10.242136 kernel: device-mapper: ioctl: error adding target to table Jan 20 13:56:10.242255 (sd-merge)[1834]: device-mapper: reload ioctl on 21e0d131085f822d5f55463400792014eae7e81e64b01ebce2c5152576a8ca5e-verity (254:1) failed: Invalid argument Jan 20 13:56:10.253406 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 20 13:56:10.463578 kernel: erofs: (device dm-1): mounted with root inode @ nid 39. Jan 20 13:56:10.483682 kernel: loop6: detected capacity change from 0 to 161080 Jan 20 13:56:10.487442 kernel: loop6: p1 p2 p3 Jan 20 13:56:10.504902 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 20 13:56:10.505016 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Jan 20 13:56:10.510057 kernel: device-mapper: table: 254:2: verity: Unrecognized verity feature request (-EINVAL) Jan 20 13:56:10.513549 kernel: device-mapper: ioctl: error adding target to table Jan 20 13:56:10.513728 (sd-merge)[1834]: device-mapper: reload ioctl on 0794950fccbae52dbe5da93b9787c03924771d9aa1f4ec61b583bafb1175a73b-verity (254:2) failed: Invalid argument Jan 20 13:56:10.520670 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 20 13:56:10.584427 kernel: erofs: (device dm-2): mounted with root inode @ nid 39. Jan 20 13:56:10.588410 kernel: loop7: detected capacity change from 0 to 159496 Jan 20 13:56:10.592406 kernel: loop7: p1 p2 p3 Jan 20 13:56:10.650169 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 20 13:56:10.650260 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Jan 20 13:56:10.655107 kernel: device-mapper: table: 254:3: verity: Unrecognized verity feature request (-EINVAL) Jan 20 13:56:10.674332 kernel: device-mapper: ioctl: error adding target to table Jan 20 13:56:10.674683 (sd-merge)[1834]: device-mapper: reload ioctl on 2f9e2bab96f77efc91ee59539c4014b69411df741e28fc146d6226bcc68e2bf9-verity (254:3) failed: Invalid argument Jan 20 13:56:10.683644 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 20 13:56:10.789420 kernel: erofs: (device dm-3): mounted with root inode @ nid 39. Jan 20 13:56:10.795411 kernel: loop1: detected capacity change from 0 to 207008 Jan 20 13:56:10.814104 (sd-merge)[1834]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Jan 20 13:56:10.816439 (sd-merge)[1834]: Merged extensions into '/usr'. Jan 20 13:56:10.819133 systemd[1]: Reload requested from client PID 1654 ('systemd-sysext') (unit systemd-sysext.service)... Jan 20 13:56:10.819149 systemd[1]: Reloading... Jan 20 13:56:10.893469 zram_generator::config[1881]: No configuration found. Jan 20 13:56:10.947537 systemd-networkd[1707]: eth0: Gained IPv6LL Jan 20 13:56:11.260886 systemd[1]: Reloading finished in 441 ms. Jan 20 13:56:11.292046 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Jan 20 13:56:11.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:11.298052 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 13:56:11.309455 kernel: audit: type=1130 audit(1768917371.296:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:11.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:11.318233 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 13:56:11.332464 kernel: audit: type=1130 audit(1768917371.313:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:11.342578 systemd[1]: Starting ensure-sysext.service... Jan 20 13:56:11.348583 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 13:56:11.362000 audit: BPF prog-id=31 op=LOAD Jan 20 13:56:11.362000 audit: BPF prog-id=22 op=UNLOAD Jan 20 13:56:11.362000 audit: BPF prog-id=32 op=LOAD Jan 20 13:56:11.362000 audit: BPF prog-id=33 op=LOAD Jan 20 13:56:11.362000 audit: BPF prog-id=23 op=UNLOAD Jan 20 13:56:11.362000 audit: BPF prog-id=24 op=UNLOAD Jan 20 13:56:11.362000 audit: BPF prog-id=34 op=LOAD Jan 20 13:56:11.363000 audit: BPF prog-id=21 op=UNLOAD Jan 20 13:56:11.363000 audit: BPF prog-id=35 op=LOAD Jan 20 13:56:11.363000 audit: BPF prog-id=30 op=UNLOAD Jan 20 13:56:11.364000 audit: BPF prog-id=36 op=LOAD Jan 20 13:56:11.364000 audit: BPF prog-id=15 op=UNLOAD Jan 20 13:56:11.364000 audit: BPF prog-id=37 op=LOAD Jan 20 13:56:11.364000 audit: BPF prog-id=38 op=LOAD Jan 20 13:56:11.364000 audit: BPF prog-id=16 op=UNLOAD Jan 20 13:56:11.364000 audit: BPF prog-id=17 op=UNLOAD Jan 20 13:56:11.364000 audit: BPF prog-id=39 op=LOAD Jan 20 13:56:11.364000 audit: BPF prog-id=40 op=LOAD Jan 20 13:56:11.364000 audit: BPF prog-id=28 op=UNLOAD Jan 20 13:56:11.364000 audit: BPF prog-id=29 op=UNLOAD Jan 20 13:56:11.365000 audit: BPF prog-id=41 op=LOAD Jan 20 13:56:11.365000 audit: BPF prog-id=25 op=UNLOAD Jan 20 13:56:11.365000 audit: BPF prog-id=42 op=LOAD Jan 20 13:56:11.365000 audit: BPF prog-id=43 op=LOAD Jan 20 13:56:11.365000 audit: BPF prog-id=26 op=UNLOAD Jan 20 13:56:11.365000 audit: BPF prog-id=27 op=UNLOAD Jan 20 13:56:11.365000 audit: BPF prog-id=44 op=LOAD Jan 20 13:56:11.365000 audit: BPF prog-id=18 op=UNLOAD Jan 20 13:56:11.365000 audit: BPF prog-id=45 op=LOAD Jan 20 13:56:11.365000 audit: BPF prog-id=46 op=LOAD Jan 20 13:56:11.365000 audit: BPF prog-id=19 op=UNLOAD Jan 20 13:56:11.365000 audit: BPF prog-id=20 op=UNLOAD Jan 20 13:56:11.371078 systemd[1]: Reload requested from client PID 1941 ('systemctl') (unit ensure-sysext.service)... Jan 20 13:56:11.371092 systemd[1]: Reloading... Jan 20 13:56:11.389859 systemd-tmpfiles[1942]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Jan 20 13:56:11.390289 systemd-tmpfiles[1942]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 20 13:56:11.390641 systemd-tmpfiles[1942]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 13:56:11.391747 systemd-tmpfiles[1942]: ACLs are not supported, ignoring. Jan 20 13:56:11.391817 systemd-tmpfiles[1942]: ACLs are not supported, ignoring. Jan 20 13:56:11.435247 systemd-tmpfiles[1942]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 13:56:11.435426 systemd-tmpfiles[1942]: Skipping /boot Jan 20 13:56:11.442306 systemd-tmpfiles[1942]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 13:56:11.442543 systemd-tmpfiles[1942]: Skipping /boot Jan 20 13:56:11.448415 zram_generator::config[1974]: No configuration found. Jan 20 13:56:11.623479 systemd[1]: Reloading finished in 252 ms. Jan 20 13:56:11.644027 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 13:56:11.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:11.650000 audit: BPF prog-id=47 op=LOAD Jan 20 13:56:11.650000 audit: BPF prog-id=44 op=UNLOAD Jan 20 13:56:11.650000 audit: BPF prog-id=48 op=LOAD Jan 20 13:56:11.650000 audit: BPF prog-id=49 op=LOAD Jan 20 13:56:11.650000 audit: BPF prog-id=45 op=UNLOAD Jan 20 13:56:11.650000 audit: BPF prog-id=46 op=UNLOAD Jan 20 13:56:11.650000 audit: BPF prog-id=50 op=LOAD Jan 20 13:56:11.650000 audit: BPF prog-id=51 op=LOAD Jan 20 13:56:11.650000 audit: BPF prog-id=39 op=UNLOAD Jan 20 13:56:11.650000 audit: BPF prog-id=40 op=UNLOAD Jan 20 13:56:11.651000 audit: BPF prog-id=52 op=LOAD Jan 20 13:56:11.651000 audit: BPF prog-id=36 op=UNLOAD Jan 20 13:56:11.651000 audit: BPF prog-id=53 op=LOAD Jan 20 13:56:11.651000 audit: BPF prog-id=54 op=LOAD Jan 20 13:56:11.651000 audit: BPF prog-id=37 op=UNLOAD Jan 20 13:56:11.651000 audit: BPF prog-id=38 op=UNLOAD Jan 20 13:56:11.652000 audit: BPF prog-id=55 op=LOAD Jan 20 13:56:11.652000 audit: BPF prog-id=41 op=UNLOAD Jan 20 13:56:11.652000 audit: BPF prog-id=56 op=LOAD Jan 20 13:56:11.652000 audit: BPF prog-id=57 op=LOAD Jan 20 13:56:11.652000 audit: BPF prog-id=42 op=UNLOAD Jan 20 13:56:11.652000 audit: BPF prog-id=43 op=UNLOAD Jan 20 13:56:11.652000 audit: BPF prog-id=58 op=LOAD Jan 20 13:56:11.652000 audit: BPF prog-id=34 op=UNLOAD Jan 20 13:56:11.653000 audit: BPF prog-id=59 op=LOAD Jan 20 13:56:11.653000 audit: BPF prog-id=31 op=UNLOAD Jan 20 13:56:11.653000 audit: BPF prog-id=60 op=LOAD Jan 20 13:56:11.653000 audit: BPF prog-id=61 op=LOAD Jan 20 13:56:11.653000 audit: BPF prog-id=32 op=UNLOAD Jan 20 13:56:11.653000 audit: BPF prog-id=33 op=UNLOAD Jan 20 13:56:11.653000 audit: BPF prog-id=62 op=LOAD Jan 20 13:56:11.653000 audit: BPF prog-id=35 op=UNLOAD Jan 20 13:56:11.668497 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 13:56:11.682580 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 20 13:56:11.690299 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 20 13:56:11.698574 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 13:56:11.706560 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Jan 20 13:56:11.716782 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 13:56:11.716000 audit[2037]: SYSTEM_BOOT pid=2037 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 20 13:56:11.718631 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 13:56:11.725700 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 13:56:11.739048 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 13:56:11.744184 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 13:56:11.745154 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 13:56:11.745302 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 13:56:11.746470 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 13:56:11.746677 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 13:56:11.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:11.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:11.752866 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 13:56:11.753064 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 13:56:11.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:11.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:11.760838 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 13:56:11.762623 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 13:56:11.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:11.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:11.777592 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 13:56:11.779570 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 20 13:56:11.791605 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 13:56:11.797959 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 13:56:11.802238 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 13:56:11.802534 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 13:56:11.802667 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 13:56:11.805416 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 13:56:11.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:11.812222 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 20 13:56:11.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:11.817860 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 13:56:11.818047 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 13:56:11.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:11.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:11.823128 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 13:56:11.823281 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 13:56:11.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:11.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:11.829200 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 13:56:11.829596 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 13:56:11.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:11.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 13:56:11.840757 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 13:56:11.842057 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 13:56:11.851585 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 13:56:11.857577 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 13:56:11.868615 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 13:56:11.873704 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 13:56:11.874065 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 13:56:11.874256 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 13:56:11.874533 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 13:56:11.880433 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 13:56:11.880882 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 13:56:11.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:11.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:11.886435 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 13:56:11.886723 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 13:56:11.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:11.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:11.891733 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 13:56:11.891889 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 13:56:11.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:11.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:11.897623 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 13:56:11.897782 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 20 13:56:11.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:11.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:11.905230 systemd[1]: Finished ensure-sysext.service. Jan 20 13:56:11.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:11.910997 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 13:56:11.911063 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 13:56:11.986000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 20 13:56:11.986000 audit[2079]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff9f97c30 a2=420 a3=0 items=0 ppid=2032 pid=2079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:11.986000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 13:56:11.987935 augenrules[2079]: No rules Jan 20 13:56:11.989218 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 13:56:11.989613 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 13:56:12.573459 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 13:56:12.579532 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 13:56:17.321415 ldconfig[2034]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 13:56:17.332264 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 13:56:17.339127 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 13:56:17.355287 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 13:56:17.360056 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 13:56:17.364560 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 13:56:17.370262 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 13:56:17.375602 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 13:56:17.379951 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 13:56:17.385629 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 20 13:56:17.390886 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. 
Jan 20 13:56:17.395428 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 13:56:17.400775 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 13:56:17.400814 systemd[1]: Reached target paths.target - Path Units. Jan 20 13:56:17.404540 systemd[1]: Reached target timers.target - Timer Units. Jan 20 13:56:17.409299 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 13:56:17.415066 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 13:56:17.420507 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 20 13:56:17.425834 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 20 13:56:17.431295 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 20 13:56:17.437677 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 13:56:17.442199 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 20 13:56:17.447859 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 13:56:17.452297 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 13:56:17.456195 systemd[1]: Reached target basic.target - Basic System. Jan 20 13:56:17.459938 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 13:56:17.459968 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 13:56:17.462405 systemd[1]: Starting chronyd.service - NTP client/server... Jan 20 13:56:17.478519 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 13:56:17.486615 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 20 13:56:17.495517 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 13:56:17.501646 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 13:56:17.510898 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 13:56:17.513536 chronyd[2091]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 20 13:56:17.516732 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 13:56:17.522959 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 13:56:17.523698 jq[2096]: false Jan 20 13:56:17.524669 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 20 13:56:17.529774 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 20 13:56:17.531273 chronyd[2091]: Timezone right/UTC failed leap second check, ignoring Jan 20 13:56:17.531460 chronyd[2091]: Loaded seccomp filter (level 2) Jan 20 13:56:17.535507 KVP[2101]: KVP starting; pid is:2101 Jan 20 13:56:17.535507 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 13:56:17.541208 KVP[2101]: KVP LIC Version: 3.1 Jan 20 13:56:17.543549 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Jan 20 13:56:17.544429 kernel: hv_utils: KVP IC version 4.0 Jan 20 13:56:17.548720 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 13:56:17.556502 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 13:56:17.565540 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 13:56:17.575584 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 13:56:17.577933 extend-filesystems[2100]: Found /dev/sda6 Jan 20 13:56:17.586739 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 13:56:17.592743 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 13:56:17.597075 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 13:56:17.599991 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 13:56:17.608501 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 13:56:17.614898 systemd[1]: Started chronyd.service - NTP client/server. Jan 20 13:56:17.619731 jq[2128]: true Jan 20 13:56:17.619959 extend-filesystems[2100]: Found /dev/sda9 Jan 20 13:56:17.624944 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 13:56:17.634891 extend-filesystems[2100]: Checking size of /dev/sda9 Jan 20 13:56:17.638806 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 13:56:17.640544 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 13:56:17.641770 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 13:56:17.642461 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 13:56:17.654812 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 13:56:17.655043 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 13:56:17.659163 extend-filesystems[2100]: Resized partition /dev/sda9 Jan 20 13:56:17.684756 extend-filesystems[2141]: resize2fs 1.47.3 (8-Jul-2025) Jan 20 13:56:17.704735 update_engine[2120]: I20260120 13:56:17.674039 2120 main.cc:92] Flatcar Update Engine starting Jan 20 13:56:17.694128 systemd-logind[2119]: New seat seat0. Jan 20 13:56:17.696590 systemd-logind[2119]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 20 13:56:17.696840 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 13:56:17.705325 jq[2142]: true Jan 20 13:56:17.717070 kernel: EXT4-fs (sda9): resizing filesystem from 6359552 to 6376955 blocks Jan 20 13:56:17.717166 kernel: EXT4-fs (sda9): resized filesystem to 6376955 Jan 20 13:56:17.715446 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 13:56:17.747924 extend-filesystems[2141]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 20 13:56:17.747924 extend-filesystems[2141]: old_desc_blocks = 4, new_desc_blocks = 4 Jan 20 13:56:17.747924 extend-filesystems[2141]: The filesystem on /dev/sda9 is now 6376955 (4k) blocks long. Jan 20 13:56:17.776555 extend-filesystems[2100]: Resized filesystem in /dev/sda9 Jan 20 13:56:17.755424 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Jan 20 13:56:17.785707 sshd_keygen[2130]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 13:56:17.755882 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 13:56:17.795803 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 13:56:17.802569 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 13:56:17.811260 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 20 13:56:17.842663 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 13:56:17.842913 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 13:56:17.845014 tar[2138]: linux-arm64/LICENSE Jan 20 13:56:17.845619 tar[2138]: linux-arm64/helm Jan 20 13:56:17.852095 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 13:56:17.874704 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 20 13:56:17.886039 bash[2193]: Updated "/home/core/.ssh/authorized_keys" Jan 20 13:56:17.885681 dbus-daemon[2094]: [system] SELinux support is enabled Jan 20 13:56:17.886038 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 13:56:17.895601 update_engine[2120]: I20260120 13:56:17.895541 2120 update_check_scheduler.cc:74] Next update check in 10m39s Jan 20 13:56:17.897696 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 13:56:17.914708 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 20 13:56:17.914906 dbus-daemon[2094]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 20 13:56:17.914798 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 13:56:17.914821 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 13:56:17.926666 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 13:56:17.926925 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 13:56:17.937975 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 13:56:17.947806 systemd[1]: Started update-engine.service - Update Engine. Jan 20 13:56:17.952566 coreos-metadata[2093]: Jan 20 13:56:17.952 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 20 13:56:17.957570 coreos-metadata[2093]: Jan 20 13:56:17.957 INFO Fetch successful Jan 20 13:56:17.957570 coreos-metadata[2093]: Jan 20 13:56:17.957 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 20 13:56:17.962488 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Jan 20 13:56:17.968542 coreos-metadata[2093]: Jan 20 13:56:17.966 INFO Fetch successful Jan 20 13:56:17.968542 coreos-metadata[2093]: Jan 20 13:56:17.966 INFO Fetching http://168.63.129.16/machine/5872b0ba-b0b2-4ff7-8af6-6c7734cfdf26/bca5824e%2Ddb75%2D4dd0%2D8214%2Df59f85f05c16.%5Fci%2D9999.1.1%2Df%2D6b32856eb5?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 20 13:56:17.968542 coreos-metadata[2093]: Jan 20 13:56:17.967 INFO Fetch successful Jan 20 13:56:17.968542 coreos-metadata[2093]: Jan 20 13:56:17.967 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 20 13:56:17.972112 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 20 13:56:17.979794 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 13:56:17.989671 coreos-metadata[2093]: Jan 20 13:56:17.984 INFO Fetch successful Jan 20 13:56:17.990792 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 13:56:18.059494 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 20 13:56:18.068934 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 13:56:18.140094 locksmithd[2261]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 13:56:18.290870 tar[2138]: linux-arm64/README.md Jan 20 13:56:18.303895 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 13:56:18.542563 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 13:56:18.601895 (kubelet)[2290]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 13:56:18.946313 kubelet[2290]: E0120 13:56:18.946263 2290 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 13:56:18.948607 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 13:56:18.948838 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 13:56:18.949380 systemd[1]: kubelet.service: Consumed 549ms CPU time, 256.1M memory peak. 
Jan 20 13:56:19.564396 containerd[2143]: time="2026-01-20T13:56:19Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 20 13:56:19.565676 containerd[2143]: time="2026-01-20T13:56:19.565636448Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 20 13:56:19.573211 containerd[2143]: time="2026-01-20T13:56:19.573165096Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.384µs" Jan 20 13:56:19.573263 containerd[2143]: time="2026-01-20T13:56:19.573209088Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 20 13:56:19.573263 containerd[2143]: time="2026-01-20T13:56:19.573256472Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 20 13:56:19.573287 containerd[2143]: time="2026-01-20T13:56:19.573265664Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 20 13:56:19.573716 containerd[2143]: time="2026-01-20T13:56:19.573678776Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 20 13:56:19.573716 containerd[2143]: time="2026-01-20T13:56:19.573714144Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 13:56:19.573804 containerd[2143]: time="2026-01-20T13:56:19.573786344Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 13:56:19.573804 containerd[2143]: time="2026-01-20T13:56:19.573800008Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 13:56:19.574020 containerd[2143]: time="2026-01-20T13:56:19.573999808Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 13:56:19.574020 containerd[2143]: time="2026-01-20T13:56:19.574017768Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 13:56:19.574045 containerd[2143]: time="2026-01-20T13:56:19.574028648Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 13:56:19.574045 containerd[2143]: time="2026-01-20T13:56:19.574034176Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 13:56:19.574205 containerd[2143]: time="2026-01-20T13:56:19.574188680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 20 13:56:19.574252 containerd[2143]: time="2026-01-20T13:56:19.574240536Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 20 13:56:19.574380 containerd[2143]: time="2026-01-20T13:56:19.574364432Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 13:56:19.574440 containerd[2143]: 
time="2026-01-20T13:56:19.574425424Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 13:56:19.574440 containerd[2143]: time="2026-01-20T13:56:19.574437040Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 20 13:56:19.574637 containerd[2143]: time="2026-01-20T13:56:19.574619240Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 20 13:56:19.574863 containerd[2143]: time="2026-01-20T13:56:19.574838888Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 20 13:56:19.574936 containerd[2143]: time="2026-01-20T13:56:19.574920128Z" level=info msg="metadata content store policy set" policy=shared Jan 20 13:56:19.598533 containerd[2143]: time="2026-01-20T13:56:19.598480160Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 20 13:56:19.598598 containerd[2143]: time="2026-01-20T13:56:19.598554144Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 13:56:19.599038 containerd[2143]: time="2026-01-20T13:56:19.599010464Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 13:56:19.599038 containerd[2143]: time="2026-01-20T13:56:19.599032920Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 20 13:56:19.599075 containerd[2143]: time="2026-01-20T13:56:19.599047456Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 20 13:56:19.599075 containerd[2143]: time="2026-01-20T13:56:19.599055576Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 20 13:56:19.599075 containerd[2143]: time="2026-01-20T13:56:19.599063856Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 20 13:56:19.599166 containerd[2143]: time="2026-01-20T13:56:19.599079304Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 20 13:56:19.599166 containerd[2143]: time="2026-01-20T13:56:19.599088312Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 20 13:56:19.599166 containerd[2143]: time="2026-01-20T13:56:19.599096800Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 20 13:56:19.599166 containerd[2143]: time="2026-01-20T13:56:19.599103416Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 20 13:56:19.599166 containerd[2143]: time="2026-01-20T13:56:19.599110008Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 20 13:56:19.599166 containerd[2143]: time="2026-01-20T13:56:19.599116864Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 20 13:56:19.599166 containerd[2143]: time="2026-01-20T13:56:19.599127000Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task 
type=io.containerd.runtime.v2 Jan 20 13:56:19.599297 containerd[2143]: time="2026-01-20T13:56:19.599269272Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 20 13:56:19.599297 containerd[2143]: time="2026-01-20T13:56:19.599294152Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 20 13:56:19.599322 containerd[2143]: time="2026-01-20T13:56:19.599305368Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 20 13:56:19.599322 containerd[2143]: time="2026-01-20T13:56:19.599312560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 20 13:56:19.599322 containerd[2143]: time="2026-01-20T13:56:19.599319552Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 20 13:56:19.599364 containerd[2143]: time="2026-01-20T13:56:19.599325696Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 20 13:56:19.599364 containerd[2143]: time="2026-01-20T13:56:19.599337896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 20 13:56:19.599364 containerd[2143]: time="2026-01-20T13:56:19.599350600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 20 13:56:19.599364 containerd[2143]: time="2026-01-20T13:56:19.599357912Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 20 13:56:19.599424 containerd[2143]: time="2026-01-20T13:56:19.599364424Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 20 13:56:19.599424 containerd[2143]: time="2026-01-20T13:56:19.599371096Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 20 13:56:19.599463 containerd[2143]: time="2026-01-20T13:56:19.599456840Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 20 13:56:19.599656 containerd[2143]: time="2026-01-20T13:56:19.599633408Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 20 13:56:19.599684 containerd[2143]: time="2026-01-20T13:56:19.599671920Z" level=info msg="Start snapshots syncer" Jan 20 13:56:19.599714 containerd[2143]: time="2026-01-20T13:56:19.599692408Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 20 13:56:19.600092 containerd[2143]: time="2026-01-20T13:56:19.600045512Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 20 13:56:19.600210 containerd[2143]: time="2026-01-20T13:56:19.600104088Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 20 13:56:19.600253 containerd[2143]: time="2026-01-20T13:56:19.600234296Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 20 13:56:19.600413 containerd[2143]: time="2026-01-20T13:56:19.600380768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 20 13:56:19.600436 containerd[2143]: time="2026-01-20T13:56:19.600417584Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 20 13:56:19.600436 containerd[2143]: time="2026-01-20T13:56:19.600425888Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 20 13:56:19.600436 containerd[2143]: time="2026-01-20T13:56:19.600433424Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 20 13:56:19.600488 containerd[2143]: time="2026-01-20T13:56:19.600441192Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 20 13:56:19.600488 containerd[2143]: time="2026-01-20T13:56:19.600453312Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 20 13:56:19.600488 containerd[2143]: time="2026-01-20T13:56:19.600462216Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 20 13:56:19.600488 containerd[2143]: time="2026-01-20T13:56:19.600468816Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 20 
13:56:19.600488 containerd[2143]: time="2026-01-20T13:56:19.600476168Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 20 13:56:19.600556 containerd[2143]: time="2026-01-20T13:56:19.600501048Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 13:56:19.600556 containerd[2143]: time="2026-01-20T13:56:19.600510648Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 13:56:19.600556 containerd[2143]: time="2026-01-20T13:56:19.600521448Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 13:56:19.600556 containerd[2143]: time="2026-01-20T13:56:19.600527464Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 13:56:19.600556 containerd[2143]: time="2026-01-20T13:56:19.600532280Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 20 13:56:19.600556 containerd[2143]: time="2026-01-20T13:56:19.600538704Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 20 13:56:19.600556 containerd[2143]: time="2026-01-20T13:56:19.600545528Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 20 13:56:19.600638 containerd[2143]: time="2026-01-20T13:56:19.600560808Z" level=info msg="runtime interface created" Jan 20 13:56:19.600638 containerd[2143]: time="2026-01-20T13:56:19.600564464Z" level=info msg="created NRI interface" Jan 20 13:56:19.600638 containerd[2143]: time="2026-01-20T13:56:19.600569368Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 20 13:56:19.600638 containerd[2143]: time="2026-01-20T13:56:19.600577784Z" level=info msg="Connect containerd service" Jan 20 13:56:19.600638 containerd[2143]: time="2026-01-20T13:56:19.600599584Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 13:56:19.601278 containerd[2143]: time="2026-01-20T13:56:19.601250208Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 13:56:20.544799 containerd[2143]: time="2026-01-20T13:56:20.544700296Z" level=info msg="Start subscribing containerd event" Jan 20 13:56:20.544799 containerd[2143]: time="2026-01-20T13:56:20.544773848Z" level=info msg="Start recovering state" Jan 20 13:56:20.545630 containerd[2143]: time="2026-01-20T13:56:20.545347560Z" level=info msg="Start event monitor" Jan 20 13:56:20.545630 containerd[2143]: time="2026-01-20T13:56:20.545373536Z" level=info msg="Start cni network conf syncer for default" Jan 20 13:56:20.545630 containerd[2143]: time="2026-01-20T13:56:20.545380968Z" level=info msg="Start streaming server" Jan 20 13:56:20.545630 containerd[2143]: time="2026-01-20T13:56:20.545456416Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 20 13:56:20.545630 containerd[2143]: time="2026-01-20T13:56:20.545463168Z" level=info msg="runtime interface starting up..." 
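The "no network config found in /etc/cni/net.d" error above is likewise a first-boot condition: the cri config dump earlier points at confDir /etc/cni/net.d and binDirs ["/opt/cni/bin"], and the cni network conf syncer started just above will pick up whatever a network add-on later installs there. As a purely hypothetical illustration of the kind of file it looks for (name, bridge and subnet invented, not what this node will actually use):

#!/usr/bin/env python3
# Hypothetical illustration only: a minimal CNI conflist of the shape the cri
# plugin's conf syncer expects in /etc/cni/net.d. All values below are invented;
# the real file is normally installed by a network add-on, not written by hand.
import json

conflist = {
    "cniVersion": "1.0.0",
    "name": "example-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {"type": "host-local",
                     "ranges": [[{"subnet": "10.244.0.0/24"}]]},
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}
print(json.dumps(conflist, indent=2))   # would live in a file such as 10-example.conflist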
Jan 20 13:56:20.545630 containerd[2143]: time="2026-01-20T13:56:20.545467608Z" level=info msg="starting plugins..." Jan 20 13:56:20.545630 containerd[2143]: time="2026-01-20T13:56:20.545481504Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 20 13:56:20.546002 containerd[2143]: time="2026-01-20T13:56:20.545919160Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 13:56:20.546002 containerd[2143]: time="2026-01-20T13:56:20.545978232Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 13:56:20.552069 containerd[2143]: time="2026-01-20T13:56:20.546652936Z" level=info msg="containerd successfully booted in 0.985091s" Jan 20 13:56:20.547598 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 13:56:20.553748 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 13:56:20.563868 systemd[1]: Startup finished in 3.109s (kernel) + 12.953s (initrd) + 17.586s (userspace) = 33.648s. Jan 20 13:56:21.170046 login[2255]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 20 13:56:21.170749 login[2258]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 20 13:56:21.181362 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 13:56:21.184644 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 13:56:21.186708 systemd-logind[2119]: New session 1 of user core. Jan 20 13:56:21.190362 systemd-logind[2119]: New session 2 of user core. Jan 20 13:56:21.233986 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 13:56:21.236954 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 13:56:21.248728 (systemd)[2321]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 20 13:56:21.251366 systemd-logind[2119]: New session 3 of user core. 
Jan 20 13:56:21.341400 waagent[2227]: 2026-01-20T13:56:21.340884Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jan 20 13:56:21.345526 waagent[2227]: 2026-01-20T13:56:21.345472Z INFO Daemon Daemon OS: flatcar 9999.1.1 Jan 20 13:56:21.349410 waagent[2227]: 2026-01-20T13:56:21.349353Z INFO Daemon Daemon Python: 3.11.13 Jan 20 13:56:21.353483 waagent[2227]: 2026-01-20T13:56:21.352746Z INFO Daemon Daemon Run daemon Jan 20 13:56:21.356038 waagent[2227]: 2026-01-20T13:56:21.355999Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='9999.1.1' Jan 20 13:56:21.363180 waagent[2227]: 2026-01-20T13:56:21.362913Z INFO Daemon Daemon Using waagent for provisioning Jan 20 13:56:21.367615 waagent[2227]: 2026-01-20T13:56:21.367573Z INFO Daemon Daemon Activate resource disk Jan 20 13:56:21.371080 waagent[2227]: 2026-01-20T13:56:21.371046Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 20 13:56:21.379771 waagent[2227]: 2026-01-20T13:56:21.379729Z INFO Daemon Daemon Found device: None Jan 20 13:56:21.383716 waagent[2227]: 2026-01-20T13:56:21.383684Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 20 13:56:21.391389 waagent[2227]: 2026-01-20T13:56:21.390507Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 20 13:56:21.400423 waagent[2227]: 2026-01-20T13:56:21.400358Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 20 13:56:21.405442 waagent[2227]: 2026-01-20T13:56:21.405409Z INFO Daemon Daemon Running default provisioning handler Jan 20 13:56:21.415651 waagent[2227]: 2026-01-20T13:56:21.415594Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 20 13:56:21.427432 waagent[2227]: 2026-01-20T13:56:21.427123Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 20 13:56:21.435287 waagent[2227]: 2026-01-20T13:56:21.435202Z INFO Daemon Daemon cloud-init is enabled: False Jan 20 13:56:21.436355 systemd[2321]: Queued start job for default target default.target. Jan 20 13:56:21.439069 waagent[2227]: 2026-01-20T13:56:21.439029Z INFO Daemon Daemon Copying ovf-env.xml Jan 20 13:56:21.446662 systemd[2321]: Created slice app.slice - User Application Slice. Jan 20 13:56:21.446694 systemd[2321]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 20 13:56:21.446703 systemd[2321]: Reached target paths.target - Paths. Jan 20 13:56:21.446745 systemd[2321]: Reached target timers.target - Timers. Jan 20 13:56:21.448491 systemd[2321]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 13:56:21.449561 systemd[2321]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 20 13:56:21.465237 systemd[2321]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 20 13:56:21.466352 systemd[2321]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 13:56:21.466482 systemd[2321]: Reached target sockets.target - Sockets. Jan 20 13:56:21.466530 systemd[2321]: Reached target basic.target - Basic System. Jan 20 13:56:21.466556 systemd[2321]: Reached target default.target - Main User Target. Jan 20 13:56:21.466577 systemd[2321]: Startup finished in 210ms. 
Jan 20 13:56:21.466864 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 13:56:21.472624 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 13:56:21.473593 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 20 13:56:21.483611 waagent[2227]: 2026-01-20T13:56:21.483285Z INFO Daemon Daemon Successfully mounted dvd Jan 20 13:56:21.543897 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 20 13:56:21.547416 waagent[2227]: 2026-01-20T13:56:21.545967Z INFO Daemon Daemon Detect protocol endpoint Jan 20 13:56:21.549979 waagent[2227]: 2026-01-20T13:56:21.549934Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 20 13:56:21.554545 waagent[2227]: 2026-01-20T13:56:21.554488Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 20 13:56:21.559678 waagent[2227]: 2026-01-20T13:56:21.559639Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 20 13:56:21.566007 waagent[2227]: 2026-01-20T13:56:21.565676Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 20 13:56:21.570234 waagent[2227]: 2026-01-20T13:56:21.570181Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 20 13:56:21.648018 waagent[2227]: 2026-01-20T13:56:21.643170Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 20 13:56:21.648461 waagent[2227]: 2026-01-20T13:56:21.648435Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 20 13:56:21.652584 waagent[2227]: 2026-01-20T13:56:21.652550Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 20 13:56:21.733767 waagent[2227]: 2026-01-20T13:56:21.733629Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 20 13:56:21.739654 waagent[2227]: 2026-01-20T13:56:21.739591Z INFO Daemon Daemon Forcing an update of the goal state. Jan 20 13:56:21.747790 waagent[2227]: 2026-01-20T13:56:21.747735Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 20 13:56:21.769455 waagent[2227]: 2026-01-20T13:56:21.769372Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 20 13:56:21.774010 waagent[2227]: 2026-01-20T13:56:21.773958Z INFO Daemon Jan 20 13:56:21.776278 waagent[2227]: 2026-01-20T13:56:21.776238Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 85012185-a2e4-4c72-aa51-255e73ee6fda eTag: 4410410366315667552 source: Fabric] Jan 20 13:56:21.784647 waagent[2227]: 2026-01-20T13:56:21.784607Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 20 13:56:21.789651 waagent[2227]: 2026-01-20T13:56:21.789614Z INFO Daemon Jan 20 13:56:21.791990 waagent[2227]: 2026-01-20T13:56:21.791956Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 20 13:56:21.801076 waagent[2227]: 2026-01-20T13:56:21.801041Z INFO Daemon Daemon Downloading artifacts profile blob Jan 20 13:56:21.864112 waagent[2227]: 2026-01-20T13:56:21.864048Z INFO Daemon Downloaded certificate {'thumbprint': 'CFF6BEFC4E4A7E9D313D1B47755EC070BB493194', 'hasPrivateKey': True} Jan 20 13:56:21.871682 waagent[2227]: 2026-01-20T13:56:21.871628Z INFO Daemon Fetch goal state completed Jan 20 13:56:21.882839 waagent[2227]: 2026-01-20T13:56:21.882793Z INFO Daemon Daemon Starting provisioning Jan 20 13:56:21.887272 waagent[2227]: 2026-01-20T13:56:21.887225Z INFO Daemon Daemon Handle ovf-env.xml. 
Jan 20 13:56:21.890871 waagent[2227]: 2026-01-20T13:56:21.890829Z INFO Daemon Daemon Set hostname [ci-9999.1.1-f-6b32856eb5] Jan 20 13:56:21.897337 waagent[2227]: 2026-01-20T13:56:21.897294Z INFO Daemon Daemon Publish hostname [ci-9999.1.1-f-6b32856eb5] Jan 20 13:56:21.902743 waagent[2227]: 2026-01-20T13:56:21.902697Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 20 13:56:21.907578 waagent[2227]: 2026-01-20T13:56:21.907536Z INFO Daemon Daemon Primary interface is [eth0] Jan 20 13:56:21.918417 systemd-networkd[1707]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 13:56:21.918671 systemd-networkd[1707]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. Jan 20 13:56:21.918813 systemd-networkd[1707]: eth0: DHCP lease lost Jan 20 13:56:21.939225 waagent[2227]: 2026-01-20T13:56:21.939152Z INFO Daemon Daemon Create user account if not exists Jan 20 13:56:21.943852 waagent[2227]: 2026-01-20T13:56:21.943797Z INFO Daemon Daemon User core already exists, skip useradd Jan 20 13:56:21.948312 waagent[2227]: 2026-01-20T13:56:21.948255Z INFO Daemon Daemon Configure sudoer Jan 20 13:56:21.953453 systemd-networkd[1707]: eth0: DHCPv4 address 10.200.20.32/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 20 13:56:21.957929 waagent[2227]: 2026-01-20T13:56:21.957875Z INFO Daemon Daemon Configure sshd Jan 20 13:56:21.964680 waagent[2227]: 2026-01-20T13:56:21.964629Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 20 13:56:21.974078 waagent[2227]: 2026-01-20T13:56:21.974021Z INFO Daemon Daemon Deploy ssh public key. Jan 20 13:56:23.069007 waagent[2227]: 2026-01-20T13:56:23.068932Z INFO Daemon Daemon Provisioning complete Jan 20 13:56:23.084201 waagent[2227]: 2026-01-20T13:56:23.084150Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 20 13:56:23.089686 waagent[2227]: 2026-01-20T13:56:23.089629Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jan 20 13:56:23.097217 waagent[2227]: 2026-01-20T13:56:23.097161Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jan 20 13:56:23.202168 waagent[2373]: 2026-01-20T13:56:23.202076Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jan 20 13:56:23.202526 waagent[2373]: 2026-01-20T13:56:23.202224Z INFO ExtHandler ExtHandler OS: flatcar 9999.1.1 Jan 20 13:56:23.202526 waagent[2373]: 2026-01-20T13:56:23.202266Z INFO ExtHandler ExtHandler Python: 3.11.13 Jan 20 13:56:23.202526 waagent[2373]: 2026-01-20T13:56:23.202303Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jan 20 13:56:23.579371 waagent[2373]: 2026-01-20T13:56:23.579280Z INFO ExtHandler ExtHandler Distro: flatcar-9999.1.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jan 20 13:56:23.579577 waagent[2373]: 2026-01-20T13:56:23.579540Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 20 13:56:23.579631 waagent[2373]: 2026-01-20T13:56:23.579609Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 20 13:56:23.586073 waagent[2373]: 2026-01-20T13:56:23.586017Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 20 13:56:23.591707 waagent[2373]: 2026-01-20T13:56:23.591668Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 20 13:56:23.592138 waagent[2373]: 2026-01-20T13:56:23.592103Z INFO ExtHandler Jan 20 13:56:23.592194 waagent[2373]: 2026-01-20T13:56:23.592175Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 97a7711d-3592-491f-ae17-9ce6db98c162 eTag: 4410410366315667552 source: Fabric] Jan 20 13:56:23.592459 waagent[2373]: 2026-01-20T13:56:23.592428Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 20 13:56:23.592906 waagent[2373]: 2026-01-20T13:56:23.592874Z INFO ExtHandler Jan 20 13:56:23.592947 waagent[2373]: 2026-01-20T13:56:23.592930Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 20 13:56:23.601065 waagent[2373]: 2026-01-20T13:56:23.601028Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 20 13:56:23.658587 waagent[2373]: 2026-01-20T13:56:23.658508Z INFO ExtHandler Downloaded certificate {'thumbprint': 'CFF6BEFC4E4A7E9D313D1B47755EC070BB493194', 'hasPrivateKey': True} Jan 20 13:56:23.659014 waagent[2373]: 2026-01-20T13:56:23.658977Z INFO ExtHandler Fetch goal state completed Jan 20 13:56:23.672896 waagent[2373]: 2026-01-20T13:56:23.672832Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.5.4 30 Sep 2025 (Library: OpenSSL 3.5.4 30 Sep 2025) Jan 20 13:56:23.676848 waagent[2373]: 2026-01-20T13:56:23.676801Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2373 Jan 20 13:56:23.676971 waagent[2373]: 2026-01-20T13:56:23.676943Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 20 13:56:23.677232 waagent[2373]: 2026-01-20T13:56:23.677203Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jan 20 13:56:23.678400 waagent[2373]: 2026-01-20T13:56:23.678357Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '9999.1.1', '', 'Flatcar Container Linux by Kinvolk'] Jan 20 13:56:23.678784 waagent[2373]: 2026-01-20T13:56:23.678749Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '9999.1.1', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jan 20 13:56:23.678913 waagent[2373]: 2026-01-20T13:56:23.678887Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jan 20 13:56:23.679353 waagent[2373]: 2026-01-20T13:56:23.679321Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 20 13:56:23.697621 waagent[2373]: 2026-01-20T13:56:23.697581Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 20 13:56:23.697808 waagent[2373]: 2026-01-20T13:56:23.697779Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 20 13:56:23.702399 waagent[2373]: 2026-01-20T13:56:23.702358Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 20 13:56:23.707349 systemd[1]: Reload requested from client PID 2389 ('systemctl') (unit waagent.service)... Jan 20 13:56:23.707595 systemd[1]: Reloading... Jan 20 13:56:23.796494 zram_generator::config[2427]: No configuration found. Jan 20 13:56:23.971912 systemd[1]: Reloading finished in 264 ms. Jan 20 13:56:23.985402 waagent[2373]: 2026-01-20T13:56:23.985264Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 20 13:56:23.986411 waagent[2373]: 2026-01-20T13:56:23.986072Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 20 13:56:24.216693 waagent[2373]: 2026-01-20T13:56:24.216625Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 20 13:56:24.217360 waagent[2373]: 2026-01-20T13:56:24.217313Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jan 20 13:56:24.218215 waagent[2373]: 2026-01-20T13:56:24.218153Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 20 13:56:24.218372 waagent[2373]: 2026-01-20T13:56:24.218279Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 20 13:56:24.218463 waagent[2373]: 2026-01-20T13:56:24.218439Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 20 13:56:24.218650 waagent[2373]: 2026-01-20T13:56:24.218622Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 20 13:56:24.219027 waagent[2373]: 2026-01-20T13:56:24.218969Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 20 13:56:24.219148 waagent[2373]: 2026-01-20T13:56:24.219115Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 20 13:56:24.219148 waagent[2373]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 20 13:56:24.219148 waagent[2373]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 20 13:56:24.219148 waagent[2373]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 20 13:56:24.219148 waagent[2373]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 20 13:56:24.219148 waagent[2373]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 20 13:56:24.219148 waagent[2373]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 20 13:56:24.219267 waagent[2373]: 2026-01-20T13:56:24.219190Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 20 13:56:24.219267 waagent[2373]: 2026-01-20T13:56:24.219237Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 20 13:56:24.219350 waagent[2373]: 2026-01-20T13:56:24.219323Z INFO EnvHandler ExtHandler Configure routes Jan 20 13:56:24.219586 waagent[2373]: 2026-01-20T13:56:24.219371Z INFO EnvHandler ExtHandler Gateway:None Jan 20 13:56:24.219652 waagent[2373]: 2026-01-20T13:56:24.219623Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 20 13:56:24.219735 waagent[2373]: 2026-01-20T13:56:24.219691Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 20 13:56:24.219925 waagent[2373]: 2026-01-20T13:56:24.219900Z INFO EnvHandler ExtHandler Routes:None Jan 20 13:56:24.220499 waagent[2373]: 2026-01-20T13:56:24.220457Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 20 13:56:24.220665 waagent[2373]: 2026-01-20T13:56:24.220630Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 20 13:56:24.220811 waagent[2373]: 2026-01-20T13:56:24.220785Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 20 13:56:24.227968 waagent[2373]: 2026-01-20T13:56:24.227875Z INFO ExtHandler ExtHandler Jan 20 13:56:24.227968 waagent[2373]: 2026-01-20T13:56:24.227958Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: fd5b794a-b1ab-4c7b-9c3e-b2fe624209d7 correlation dd7fdeca-081b-456d-b0de-4625ce9f7b5d created: 2026-01-20T13:55:26.240692Z] Jan 20 13:56:24.228285 waagent[2373]: 2026-01-20T13:56:24.228244Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
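The MonitorHandler routing-table dump above is the raw /proc/net/route format, with destination and gateway as little-endian hex. Decoding it confirms the default route via 10.200.20.1 plus host routes to the wireserver (168.63.129.16) and IMDS (169.254.169.254) endpoints that appear elsewhere in this log:

#!/usr/bin/env python3
# Decode the little-endian hex addresses from the /proc/net/route dump above,
# e.g. 0114C80A -> 10.200.20.1 (gateway), 10813FA8 -> 168.63.129.16 (wireserver).
import socket, struct

def hex_to_ip(h: str) -> str:
    return socket.inet_ntoa(struct.pack("<L", int(h, 16)))

for dest, gw in [("00000000", "0114C80A"),   # default route via 10.200.20.1
                 ("0014C80A", "00000000"),   # 10.200.20.0/24, on-link
                 ("10813FA8", "0114C80A"),   # 168.63.129.16 host route
                 ("FEA9FEA9", "0114C80A")]:  # 169.254.169.254 host route
    print(f"{hex_to_ip(dest):>15}  via  {hex_to_ip(gw)}")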
Jan 20 13:56:24.228721 waagent[2373]: 2026-01-20T13:56:24.228691Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jan 20 13:56:24.267017 waagent[2373]: 2026-01-20T13:56:24.266940Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jan 20 13:56:24.267017 waagent[2373]: Try `iptables -h' or 'iptables --help' for more information.) Jan 20 13:56:24.267402 waagent[2373]: 2026-01-20T13:56:24.267361Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 73B34862-692E-460F-8BA7-495FB2D9A481;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jan 20 13:56:24.342662 waagent[2373]: 2026-01-20T13:56:24.342599Z INFO MonitorHandler ExtHandler Network interfaces: Jan 20 13:56:24.342662 waagent[2373]: Executing ['ip', '-a', '-o', 'link']: Jan 20 13:56:24.342662 waagent[2373]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 20 13:56:24.342662 waagent[2373]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f7:ab:79 brd ff:ff:ff:ff:ff:ff\ altname enx000d3af7ab79 Jan 20 13:56:24.342662 waagent[2373]: 3: enP5045s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f7:ab:79 brd ff:ff:ff:ff:ff:ff\ altname enP5045p0s2 Jan 20 13:56:24.342662 waagent[2373]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 20 13:56:24.342662 waagent[2373]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 20 13:56:24.342662 waagent[2373]: 2: eth0 inet 10.200.20.32/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 20 13:56:24.342662 waagent[2373]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 20 13:56:24.342662 waagent[2373]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 20 13:56:24.342662 waagent[2373]: 2: eth0 inet6 fe80::20d:3aff:fef7:ab79/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 20 13:56:24.370243 waagent[2373]: 2026-01-20T13:56:24.369507Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jan 20 13:56:24.370243 waagent[2373]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 20 13:56:24.370243 waagent[2373]: pkts bytes target prot opt in out source destination Jan 20 13:56:24.370243 waagent[2373]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 20 13:56:24.370243 waagent[2373]: pkts bytes target prot opt in out source destination Jan 20 13:56:24.370243 waagent[2373]: Chain OUTPUT (policy ACCEPT 3 packets, 164 bytes) Jan 20 13:56:24.370243 waagent[2373]: pkts bytes target prot opt in out source destination Jan 20 13:56:24.370243 waagent[2373]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 20 13:56:24.370243 waagent[2373]: 7 940 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 20 13:56:24.370243 waagent[2373]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 20 13:56:24.373123 waagent[2373]: 2026-01-20T13:56:24.373080Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 20 13:56:24.373123 waagent[2373]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 20 13:56:24.373123 waagent[2373]: pkts bytes target prot opt 
in out source destination Jan 20 13:56:24.373123 waagent[2373]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 20 13:56:24.373123 waagent[2373]: pkts bytes target prot opt in out source destination Jan 20 13:56:24.373123 waagent[2373]: Chain OUTPUT (policy ACCEPT 3 packets, 164 bytes) Jan 20 13:56:24.373123 waagent[2373]: pkts bytes target prot opt in out source destination Jan 20 13:56:24.373123 waagent[2373]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 20 13:56:24.373123 waagent[2373]: 9 1052 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 20 13:56:24.373123 waagent[2373]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 20 13:56:24.373623 waagent[2373]: 2026-01-20T13:56:24.373589Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 20 13:56:29.085168 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 13:56:29.087015 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 13:56:29.195490 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 13:56:29.202766 (kubelet)[2526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 13:56:29.306125 kubelet[2526]: E0120 13:56:29.306041 2526 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 13:56:29.308823 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 13:56:29.308942 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 13:56:29.309522 systemd[1]: kubelet.service: Consumed 112ms CPU time, 107.2M memory peak. Jan 20 13:56:39.335181 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 13:56:39.337077 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 13:56:39.449735 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 13:56:39.455826 (kubelet)[2540]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 13:56:39.574053 kubelet[2540]: E0120 13:56:39.574002 2540 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 13:56:39.576171 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 13:56:39.576284 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 13:56:39.576811 systemd[1]: kubelet.service: Consumed 110ms CPU time, 107.1M memory peak. Jan 20 13:56:41.337099 chronyd[2091]: Selected source PHC0 Jan 20 13:56:42.873147 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 13:56:42.874321 systemd[1]: Started sshd@0-10.200.20.32:22-10.200.16.10:44486.service - OpenSSH per-connection server daemon (10.200.16.10:44486). 
Jan 20 13:56:43.428438 sshd[2548]: Accepted publickey for core from 10.200.16.10 port 44486 ssh2: RSA SHA256:8oRp7fHfsMbr5tiGU4DdFYKNuE3Q7/G5fCohEz31AuI Jan 20 13:56:43.429534 sshd-session[2548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 13:56:43.434439 systemd-logind[2119]: New session 4 of user core. Jan 20 13:56:43.440763 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 13:56:43.745652 systemd[1]: Started sshd@1-10.200.20.32:22-10.200.16.10:44488.service - OpenSSH per-connection server daemon (10.200.16.10:44488). Jan 20 13:56:44.171793 sshd[2555]: Accepted publickey for core from 10.200.16.10 port 44488 ssh2: RSA SHA256:8oRp7fHfsMbr5tiGU4DdFYKNuE3Q7/G5fCohEz31AuI Jan 20 13:56:44.172977 sshd-session[2555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 13:56:44.176939 systemd-logind[2119]: New session 5 of user core. Jan 20 13:56:44.189535 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 13:56:44.407519 sshd[2559]: Connection closed by 10.200.16.10 port 44488 Jan 20 13:56:44.407432 sshd-session[2555]: pam_unix(sshd:session): session closed for user core Jan 20 13:56:44.412889 systemd-logind[2119]: Session 5 logged out. Waiting for processes to exit. Jan 20 13:56:44.413068 systemd[1]: sshd@1-10.200.20.32:22-10.200.16.10:44488.service: Deactivated successfully. Jan 20 13:56:44.414371 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 13:56:44.415855 systemd-logind[2119]: Removed session 5. Jan 20 13:56:44.492256 systemd[1]: Started sshd@2-10.200.20.32:22-10.200.16.10:44500.service - OpenSSH per-connection server daemon (10.200.16.10:44500). Jan 20 13:56:44.878436 sshd[2565]: Accepted publickey for core from 10.200.16.10 port 44500 ssh2: RSA SHA256:8oRp7fHfsMbr5tiGU4DdFYKNuE3Q7/G5fCohEz31AuI Jan 20 13:56:44.879447 sshd-session[2565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 13:56:44.883382 systemd-logind[2119]: New session 6 of user core. Jan 20 13:56:44.894540 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 13:56:45.098123 sshd[2569]: Connection closed by 10.200.16.10 port 44500 Jan 20 13:56:45.097652 sshd-session[2565]: pam_unix(sshd:session): session closed for user core Jan 20 13:56:45.100986 systemd[1]: sshd@2-10.200.20.32:22-10.200.16.10:44500.service: Deactivated successfully. Jan 20 13:56:45.102378 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 13:56:45.103833 systemd-logind[2119]: Session 6 logged out. Waiting for processes to exit. Jan 20 13:56:45.104751 systemd-logind[2119]: Removed session 6. Jan 20 13:56:45.177071 systemd[1]: Started sshd@3-10.200.20.32:22-10.200.16.10:44508.service - OpenSSH per-connection server daemon (10.200.16.10:44508). Jan 20 13:56:45.567464 sshd[2575]: Accepted publickey for core from 10.200.16.10 port 44508 ssh2: RSA SHA256:8oRp7fHfsMbr5tiGU4DdFYKNuE3Q7/G5fCohEz31AuI Jan 20 13:56:45.568407 sshd-session[2575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 13:56:45.572237 systemd-logind[2119]: New session 7 of user core. Jan 20 13:56:45.579703 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 20 13:56:45.780373 sshd[2579]: Connection closed by 10.200.16.10 port 44508 Jan 20 13:56:45.780917 sshd-session[2575]: pam_unix(sshd:session): session closed for user core Jan 20 13:56:45.784303 systemd[1]: sshd@3-10.200.20.32:22-10.200.16.10:44508.service: Deactivated successfully. 
Jan 20 13:56:45.785857 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 13:56:45.787101 systemd-logind[2119]: Session 7 logged out. Waiting for processes to exit. Jan 20 13:56:45.788243 systemd-logind[2119]: Removed session 7. Jan 20 13:56:45.873128 systemd[1]: Started sshd@4-10.200.20.32:22-10.200.16.10:44522.service - OpenSSH per-connection server daemon (10.200.16.10:44522). Jan 20 13:56:46.300957 sshd[2585]: Accepted publickey for core from 10.200.16.10 port 44522 ssh2: RSA SHA256:8oRp7fHfsMbr5tiGU4DdFYKNuE3Q7/G5fCohEz31AuI Jan 20 13:56:46.302094 sshd-session[2585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 13:56:46.306373 systemd-logind[2119]: New session 8 of user core. Jan 20 13:56:46.313522 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 20 13:56:46.550479 sudo[2590]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 20 13:56:46.550703 sudo[2590]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 13:56:46.582769 sudo[2590]: pam_unix(sudo:session): session closed for user root Jan 20 13:56:46.659506 sshd[2589]: Connection closed by 10.200.16.10 port 44522 Jan 20 13:56:46.660266 sshd-session[2585]: pam_unix(sshd:session): session closed for user core Jan 20 13:56:46.664006 systemd[1]: sshd@4-10.200.20.32:22-10.200.16.10:44522.service: Deactivated successfully. Jan 20 13:56:46.666601 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 13:56:46.667245 systemd-logind[2119]: Session 8 logged out. Waiting for processes to exit. Jan 20 13:56:46.668254 systemd-logind[2119]: Removed session 8. Jan 20 13:56:46.752292 systemd[1]: Started sshd@5-10.200.20.32:22-10.200.16.10:44532.service - OpenSSH per-connection server daemon (10.200.16.10:44532). Jan 20 13:56:47.138778 sshd[2597]: Accepted publickey for core from 10.200.16.10 port 44532 ssh2: RSA SHA256:8oRp7fHfsMbr5tiGU4DdFYKNuE3Q7/G5fCohEz31AuI Jan 20 13:56:47.139952 sshd-session[2597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 13:56:47.144335 systemd-logind[2119]: New session 9 of user core. Jan 20 13:56:47.151557 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 20 13:56:47.285537 sudo[2603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 20 13:56:47.285756 sudo[2603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 13:56:47.291812 sudo[2603]: pam_unix(sudo:session): session closed for user root Jan 20 13:56:47.297218 sudo[2602]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 20 13:56:47.297459 sudo[2602]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 13:56:47.303564 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
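The audit-rules restart that follows reports "No rules" from augenrules because the two rule files were removed by the sudo command above: augenrules assembles the fragments in /etc/audit/rules.d/ into /etc/audit/audit.rules, which auditctl -R then loads. A rough sketch of that merge step, under the assumption that this is the only part of the generator that matters here:

#!/usr/bin/env python3
# Rough sketch of the merge augenrules performs: concatenate /etc/audit/rules.d/*.rules
# into the file auditctl -R loads. With 80-selinux.rules and 99-default.rules deleted
# above, nothing remains, matching the "No rules" message in the log below.
import glob, pathlib

fragments = sorted(glob.glob("/etc/audit/rules.d/*.rules"))
merged = "\n".join(pathlib.Path(f).read_text() for f in fragments)
print(merged if merged.strip() else "No rules")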
Jan 20 13:56:47.342417 kernel: kauditd_printk_skb: 92 callbacks suppressed Jan 20 13:56:47.342526 kernel: audit: type=1305 audit(1768917407.336:261): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 20 13:56:47.336000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 20 13:56:47.342997 augenrules[2627]: No rules Jan 20 13:56:47.336000 audit[2627]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe6f84b90 a2=420 a3=0 items=0 ppid=2608 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:47.349276 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 13:56:47.349606 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 13:56:47.365944 kernel: audit: type=1300 audit(1768917407.336:261): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe6f84b90 a2=420 a3=0 items=0 ppid=2608 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:47.366269 sudo[2602]: pam_unix(sudo:session): session closed for user root Jan 20 13:56:47.336000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 13:56:47.374381 kernel: audit: type=1327 audit(1768917407.336:261): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 13:56:47.374451 kernel: audit: type=1130 audit(1768917407.347:262): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:47.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:47.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:47.397874 kernel: audit: type=1131 audit(1768917407.347:263): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:47.397982 kernel: audit: type=1106 audit(1768917407.365:264): pid=2602 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 13:56:47.365000 audit[2602]: USER_END pid=2602 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 13:56:47.365000 audit[2602]: CRED_DISP pid=2602 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 20 13:56:47.423387 kernel: audit: type=1104 audit(1768917407.365:265): pid=2602 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 13:56:47.437310 sshd[2601]: Connection closed by 10.200.16.10 port 44532 Jan 20 13:56:47.437216 sshd-session[2597]: pam_unix(sshd:session): session closed for user core Jan 20 13:56:47.436000 audit[2597]: USER_END pid=2597 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:56:47.457900 systemd[1]: sshd@5-10.200.20.32:22-10.200.16.10:44532.service: Deactivated successfully. Jan 20 13:56:47.437000 audit[2597]: CRED_DISP pid=2597 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:56:47.460187 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 13:56:47.471893 kernel: audit: type=1106 audit(1768917407.436:266): pid=2597 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:56:47.471938 kernel: audit: type=1104 audit(1768917407.437:267): pid=2597 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:56:47.471990 systemd-logind[2119]: Session 9 logged out. Waiting for processes to exit. Jan 20 13:56:47.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.32:22-10.200.16.10:44532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:47.485119 kernel: audit: type=1131 audit(1768917407.456:268): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.32:22-10.200.16.10:44532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:47.485810 systemd-logind[2119]: Removed session 9. Jan 20 13:56:47.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.32:22-10.200.16.10:44540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:47.523621 systemd[1]: Started sshd@6-10.200.20.32:22-10.200.16.10:44540.service - OpenSSH per-connection server daemon (10.200.16.10:44540). 
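The audit PROCTITLE fields in this stretch of the log are the process command line hex-encoded, with NUL bytes separating the arguments. Decoding the auditctl record above recovers the command, and the same decoding applies to the sshd-session and iptables PROCTITLE records that follow:

#!/usr/bin/env python3
# Decode an audit PROCTITLE field: a hex string whose arguments are NUL-separated.
def decode_proctitle(hexstr: str) -> str:
    return " ".join(arg.decode() for arg in bytes.fromhex(hexstr).split(b"\x00"))

# Value copied verbatim from the auditctl record above.
print(decode_proctitle(
    "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"))
# -> /sbin/auditctl -R /etc/audit/audit.rules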
Jan 20 13:56:47.953000 audit[2636]: USER_ACCT pid=2636 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:56:47.955417 sshd[2636]: Accepted publickey for core from 10.200.16.10 port 44540 ssh2: RSA SHA256:8oRp7fHfsMbr5tiGU4DdFYKNuE3Q7/G5fCohEz31AuI Jan 20 13:56:47.955000 audit[2636]: CRED_ACQ pid=2636 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:56:47.955000 audit[2636]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffded98a60 a2=3 a3=0 items=0 ppid=1 pid=2636 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:47.955000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 13:56:47.956870 sshd-session[2636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 13:56:47.961025 systemd-logind[2119]: New session 10 of user core. Jan 20 13:56:47.967555 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 20 13:56:47.968000 audit[2636]: USER_START pid=2636 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:56:47.970000 audit[2640]: CRED_ACQ pid=2640 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:56:48.113523 sudo[2641]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 13:56:48.113738 sudo[2641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 13:56:48.112000 audit[2641]: USER_ACCT pid=2641 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 13:56:48.112000 audit[2641]: CRED_REFR pid=2641 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 13:56:48.112000 audit[2641]: USER_START pid=2641 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 13:56:49.585128 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 20 13:56:49.586902 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 13:56:49.772170 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 13:56:49.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:49.775343 (kubelet)[2662]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 13:56:49.823606 kubelet[2662]: E0120 13:56:49.823559 2662 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 13:56:49.825568 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 13:56:49.825679 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 13:56:49.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 13:56:49.826257 systemd[1]: kubelet.service: Consumed 109ms CPU time, 107.9M memory peak. Jan 20 13:56:52.708908 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 20 13:56:52.718612 (dockerd)[2675]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 13:56:56.518543 dockerd[2675]: time="2026-01-20T13:56:56.518452997Z" level=info msg="Starting up" Jan 20 13:56:56.535232 dockerd[2675]: time="2026-01-20T13:56:56.535172654Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 20 13:56:56.583291 dockerd[2675]: time="2026-01-20T13:56:56.583146252Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 20 13:56:56.614022 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1547170301-merged.mount: Deactivated successfully. Jan 20 13:56:56.640826 dockerd[2675]: time="2026-01-20T13:56:56.640637851Z" level=info msg="Loading containers: start." 
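Right after dockerd logs "Loading containers: start.", the NETFILTER_CFG audit records below show it creating its iptables chains; decoding their PROCTITLE values in the same way as the auditctl example above makes the commands readable:

#!/usr/bin/env python3
# Hex values copied verbatim from the NETFILTER_CFG records below; decoded with the
# same NUL-separated-arguments scheme used for the earlier auditctl PROCTITLE.
for h in (
    "2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552",
    "2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244",
):
    print(" ".join(arg.decode() for arg in bytes.fromhex(h).split(b"\x00")))
# -> /usr/bin/iptables --wait -t nat -N DOCKER
# -> /usr/bin/iptables --wait -t filter -N DOCKER-FORWARD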
Jan 20 13:56:56.668419 kernel: Initializing XFRM netlink socket Jan 20 13:56:56.719000 audit[2721]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=2721 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:56:56.723452 kernel: kauditd_printk_skb: 13 callbacks suppressed Jan 20 13:56:56.723517 kernel: audit: type=1325 audit(1768917416.719:280): table=nat:5 family=2 entries=2 op=nft_register_chain pid=2721 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:56:56.719000 audit[2721]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffffe452470 a2=0 a3=0 items=0 ppid=2675 pid=2721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.749884 kernel: audit: type=1300 audit(1768917416.719:280): arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffffe452470 a2=0 a3=0 items=0 ppid=2675 pid=2721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.719000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 20 13:56:56.757669 kernel: audit: type=1327 audit(1768917416.719:280): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 20 13:56:56.725000 audit[2723]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=2723 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:56:56.766692 kernel: audit: type=1325 audit(1768917416.725:281): table=filter:6 family=2 entries=2 op=nft_register_chain pid=2723 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:56:56.725000 audit[2723]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffff282b30 a2=0 a3=0 items=0 ppid=2675 pid=2723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.784225 kernel: audit: type=1300 audit(1768917416.725:281): arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffff282b30 a2=0 a3=0 items=0 ppid=2675 pid=2723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.725000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 20 13:56:56.792740 kernel: audit: type=1327 audit(1768917416.725:281): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 20 13:56:56.726000 audit[2725]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2725 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:56:56.801625 kernel: audit: type=1325 audit(1768917416.726:282): table=filter:7 family=2 entries=1 op=nft_register_chain pid=2725 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:56:56.726000 audit[2725]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffc030120 a2=0 a3=0 items=0 ppid=2675 pid=2725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.818788 kernel: audit: type=1300 audit(1768917416.726:282): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffc030120 a2=0 a3=0 items=0 ppid=2675 pid=2725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.726000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 20 13:56:56.828349 kernel: audit: type=1327 audit(1768917416.726:282): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 20 13:56:56.728000 audit[2727]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2727 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:56:56.837452 kernel: audit: type=1325 audit(1768917416.728:283): table=filter:8 family=2 entries=1 op=nft_register_chain pid=2727 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:56:56.728000 audit[2727]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd7a3a990 a2=0 a3=0 items=0 ppid=2675 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.728000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 20 13:56:56.729000 audit[2729]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=2729 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:56:56.729000 audit[2729]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffb5a5dc0 a2=0 a3=0 items=0 ppid=2675 pid=2729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.729000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 20 13:56:56.731000 audit[2731]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=2731 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:56:56.731000 audit[2731]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffcb9e5010 a2=0 a3=0 items=0 ppid=2675 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.731000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 13:56:56.750000 audit[2733]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=2733 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:56:56.750000 audit[2733]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd829b6e0 a2=0 a3=0 items=0 ppid=2675 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.750000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 13:56:56.752000 audit[2735]: NETFILTER_CFG table=nat:12 family=2 entries=2 op=nft_register_chain pid=2735 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:56:56.752000 audit[2735]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffc475b650 a2=0 a3=0 items=0 ppid=2675 pid=2735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.752000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 20 13:56:56.846000 audit[2738]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2738 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:56:56.846000 audit[2738]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffdf2f9a00 a2=0 a3=0 items=0 ppid=2675 pid=2738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.846000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 20 13:56:56.847000 audit[2740]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=2740 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:56:56.847000 audit[2740]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffffc64e460 a2=0 a3=0 items=0 ppid=2675 pid=2740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.847000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 20 13:56:56.849000 audit[2742]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2742 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:56:56.849000 audit[2742]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffcda12390 a2=0 a3=0 items=0 ppid=2675 pid=2742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.849000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 20 13:56:56.851000 audit[2744]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2744 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:56:56.851000 audit[2744]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffe4e753b0 a2=0 a3=0 items=0 ppid=2675 pid=2744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.851000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 13:56:56.852000 audit[2746]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_rule pid=2746 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:56:56.852000 audit[2746]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffe454ff30 a2=0 a3=0 items=0 ppid=2675 pid=2746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.852000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 20 13:56:56.908000 audit[2776]: NETFILTER_CFG table=nat:18 family=10 entries=2 op=nft_register_chain pid=2776 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:56:56.908000 audit[2776]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffe91ebf20 a2=0 a3=0 items=0 ppid=2675 pid=2776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.908000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 20 13:56:56.910000 audit[2778]: NETFILTER_CFG table=filter:19 family=10 entries=2 op=nft_register_chain pid=2778 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:56:56.910000 audit[2778]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc034fed0 a2=0 a3=0 items=0 ppid=2675 pid=2778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.910000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 20 13:56:56.911000 audit[2780]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2780 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:56:56.911000 audit[2780]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc9d7eb20 a2=0 a3=0 items=0 ppid=2675 pid=2780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.911000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 20 13:56:56.913000 audit[2782]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2782 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:56:56.913000 audit[2782]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd8cc54c0 a2=0 a3=0 items=0 ppid=2675 pid=2782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.913000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 20 13:56:56.915000 audit[2784]: NETFILTER_CFG table=filter:22 family=10 entries=1 
op=nft_register_chain pid=2784 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:56:56.915000 audit[2784]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd3b299a0 a2=0 a3=0 items=0 ppid=2675 pid=2784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.915000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 20 13:56:56.916000 audit[2786]: NETFILTER_CFG table=filter:23 family=10 entries=1 op=nft_register_chain pid=2786 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:56:56.916000 audit[2786]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc71352b0 a2=0 a3=0 items=0 ppid=2675 pid=2786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.916000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 13:56:56.918000 audit[2788]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_chain pid=2788 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:56:56.918000 audit[2788]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffdb1f1c60 a2=0 a3=0 items=0 ppid=2675 pid=2788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.918000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 13:56:56.920000 audit[2790]: NETFILTER_CFG table=nat:25 family=10 entries=2 op=nft_register_chain pid=2790 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:56:56.920000 audit[2790]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffdc58f1b0 a2=0 a3=0 items=0 ppid=2675 pid=2790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.920000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 20 13:56:56.922000 audit[2792]: NETFILTER_CFG table=nat:26 family=10 entries=2 op=nft_register_chain pid=2792 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:56:56.922000 audit[2792]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffd9e20460 a2=0 a3=0 items=0 ppid=2675 pid=2792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.922000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 20 13:56:56.923000 audit[2794]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=2794 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:56:56.923000 audit[2794]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe229d0e0 a2=0 a3=0 items=0 ppid=2675 pid=2794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.923000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 20 13:56:56.925000 audit[2796]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=2796 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:56:56.925000 audit[2796]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=fffffc834ba0 a2=0 a3=0 items=0 ppid=2675 pid=2796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.925000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 20 13:56:56.927000 audit[2798]: NETFILTER_CFG table=filter:29 family=10 entries=1 op=nft_register_rule pid=2798 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:56:56.927000 audit[2798]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffe6c60bd0 a2=0 a3=0 items=0 ppid=2675 pid=2798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.927000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 13:56:56.928000 audit[2800]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_rule pid=2800 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:56:56.928000 audit[2800]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffdf176770 a2=0 a3=0 items=0 ppid=2675 pid=2800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.928000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 20 13:56:56.933000 audit[2805]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=2805 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:56:56.933000 audit[2805]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdb70f630 a2=0 a3=0 items=0 ppid=2675 pid=2805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.933000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 20 13:56:56.935000 audit[2807]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=2807 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:56:56.935000 audit[2807]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffec49e090 a2=0 
a3=0 items=0 ppid=2675 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.935000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 20 13:56:56.936000 audit[2809]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2809 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:56:56.936000 audit[2809]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffc1c24a40 a2=0 a3=0 items=0 ppid=2675 pid=2809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.936000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 20 13:56:56.938000 audit[2811]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=2811 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:56:56.938000 audit[2811]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffce2b9cc0 a2=0 a3=0 items=0 ppid=2675 pid=2811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.938000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 20 13:56:56.940000 audit[2813]: NETFILTER_CFG table=filter:35 family=10 entries=1 op=nft_register_rule pid=2813 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:56:56.940000 audit[2813]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffff4c33250 a2=0 a3=0 items=0 ppid=2675 pid=2813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.940000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 20 13:56:56.941000 audit[2815]: NETFILTER_CFG table=filter:36 family=10 entries=1 op=nft_register_rule pid=2815 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:56:56.941000 audit[2815]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffdf636160 a2=0 a3=0 items=0 ppid=2675 pid=2815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:56.941000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 20 13:56:57.054000 audit[2820]: NETFILTER_CFG table=nat:37 family=2 entries=2 op=nft_register_chain pid=2820 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:56:57.054000 audit[2820]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=fffff43198f0 a2=0 a3=0 items=0 ppid=2675 pid=2820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:57.054000 
audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 20 13:56:57.057000 audit[2822]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=2822 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:56:57.057000 audit[2822]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffd4922f50 a2=0 a3=0 items=0 ppid=2675 pid=2822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:57.057000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 20 13:56:57.063000 audit[2830]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2830 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:56:57.063000 audit[2830]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffc5db4720 a2=0 a3=0 items=0 ppid=2675 pid=2830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:57.063000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 20 13:56:57.068000 audit[2835]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2835 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:56:57.068000 audit[2835]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffff66d67f0 a2=0 a3=0 items=0 ppid=2675 pid=2835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:57.068000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 20 13:56:57.070000 audit[2837]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2837 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:56:57.070000 audit[2837]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffd95f9ae0 a2=0 a3=0 items=0 ppid=2675 pid=2837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:57.070000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 20 13:56:57.071000 audit[2839]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=2839 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:56:57.071000 audit[2839]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffffedf2200 a2=0 a3=0 items=0 ppid=2675 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:57.071000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 20 13:56:57.073000 audit[2841]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=2841 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:56:57.073000 audit[2841]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffeb2ad3f0 a2=0 a3=0 items=0 ppid=2675 pid=2841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:57.073000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 13:56:57.074000 audit[2843]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_rule pid=2843 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:56:57.074000 audit[2843]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff1f87a60 a2=0 a3=0 items=0 ppid=2675 pid=2843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:56:57.074000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 20 13:56:57.076907 systemd-networkd[1707]: docker0: Link UP Jan 20 13:56:57.093102 dockerd[2675]: time="2026-01-20T13:56:57.093062908Z" level=info msg="Loading containers: done." Jan 20 13:56:57.266850 dockerd[2675]: time="2026-01-20T13:56:57.266599815Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 13:56:57.266850 dockerd[2675]: time="2026-01-20T13:56:57.266694993Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 20 13:56:57.282227 dockerd[2675]: time="2026-01-20T13:56:57.281998051Z" level=info msg="Initializing buildkit" Jan 20 13:56:57.357913 dockerd[2675]: time="2026-01-20T13:56:57.357831512Z" level=info msg="Completed buildkit initialization" Jan 20 13:56:57.363396 dockerd[2675]: time="2026-01-20T13:56:57.363343819Z" level=info msg="Daemon has completed initialization" Jan 20 13:56:57.363501 dockerd[2675]: time="2026-01-20T13:56:57.363410485Z" level=info msg="API listen on /run/docker.sock" Jan 20 13:56:57.364895 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 13:56:57.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:57.461934 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jan 20 13:56:57.610581 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3258866144-merged.mount: Deactivated successfully. 
Jan 20 13:56:58.025512 containerd[2143]: time="2026-01-20T13:56:58.025380033Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 20 13:56:58.893712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1153120009.mount: Deactivated successfully. Jan 20 13:56:59.835123 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 20 13:56:59.837561 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 13:56:59.840278 containerd[2143]: time="2026-01-20T13:56:59.839624853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:56:59.844552 containerd[2143]: time="2026-01-20T13:56:59.844510894Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=24977907" Jan 20 13:56:59.853145 containerd[2143]: time="2026-01-20T13:56:59.853117016Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:56:59.857931 containerd[2143]: time="2026-01-20T13:56:59.857900462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:56:59.858421 containerd[2143]: time="2026-01-20T13:56:59.858349123Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 1.832915777s" Jan 20 13:56:59.859075 containerd[2143]: time="2026-01-20T13:56:59.858947620Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 20 13:56:59.859636 containerd[2143]: time="2026-01-20T13:56:59.859621295Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 20 13:56:59.954563 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 13:56:59.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:56:59.961136 (kubelet)[2948]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 13:57:00.058167 kubelet[2948]: E0120 13:57:00.058109 2948 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 13:57:00.060206 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 13:57:00.060321 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 13:57:00.062466 systemd[1]: kubelet.service: Consumed 108ms CPU time, 104.6M memory peak. 
Jan 20 13:57:00.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 13:57:02.556760 containerd[2143]: time="2026-01-20T13:57:02.556710657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:57:02.559531 containerd[2143]: time="2026-01-20T13:57:02.559341659Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22613932" Jan 20 13:57:02.562186 containerd[2143]: time="2026-01-20T13:57:02.562156674Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:57:02.566182 containerd[2143]: time="2026-01-20T13:57:02.566148346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:57:02.566795 containerd[2143]: time="2026-01-20T13:57:02.566768467Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 2.707038353s" Jan 20 13:57:02.566894 containerd[2143]: time="2026-01-20T13:57:02.566879230Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 20 13:57:02.567463 containerd[2143]: time="2026-01-20T13:57:02.567436286Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 20 13:57:02.921738 update_engine[2120]: I20260120 13:57:02.921669 2120 update_attempter.cc:509] Updating boot flags... Jan 20 13:57:10.085345 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 20 13:57:10.087174 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 13:57:10.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:57:10.184454 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 13:57:10.188224 kernel: kauditd_printk_skb: 113 callbacks suppressed Jan 20 13:57:10.188285 kernel: audit: type=1130 audit(1768917430.184:323): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 13:57:10.206640 (kubelet)[3032]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 13:57:10.231343 kubelet[3032]: E0120 13:57:10.231291 3032 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 13:57:10.233292 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 13:57:10.233519 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 13:57:10.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 13:57:10.234186 systemd[1]: kubelet.service: Consumed 105ms CPU time, 105M memory peak. Jan 20 13:57:10.246413 kernel: audit: type=1131 audit(1768917430.233:324): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 13:57:17.180779 containerd[2143]: time="2026-01-20T13:57:17.180266105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:57:17.191761 containerd[2143]: time="2026-01-20T13:57:17.191715668Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17611363" Jan 20 13:57:17.194850 containerd[2143]: time="2026-01-20T13:57:17.194827266Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:57:17.255822 containerd[2143]: time="2026-01-20T13:57:17.255777397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:57:17.256424 containerd[2143]: time="2026-01-20T13:57:17.256395024Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 14.688918881s" Jan 20 13:57:17.256424 containerd[2143]: time="2026-01-20T13:57:17.256426217Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 20 13:57:17.257045 containerd[2143]: time="2026-01-20T13:57:17.257008483Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 20 13:57:20.212215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2289779003.mount: Deactivated successfully. Jan 20 13:57:20.335375 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 20 13:57:20.337616 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 20 13:57:20.440607 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 13:57:20.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:57:20.456449 kernel: audit: type=1130 audit(1768917440.439:325): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:57:20.459777 (kubelet)[3058]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 13:57:20.495184 kubelet[3058]: E0120 13:57:20.495021 3058 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 13:57:20.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 13:57:20.498568 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 13:57:20.498669 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 13:57:20.498956 systemd[1]: kubelet.service: Consumed 110ms CPU time, 104.4M memory peak. Jan 20 13:57:20.513425 kernel: audit: type=1131 audit(1768917440.497:326): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 20 13:57:20.941903 containerd[2143]: time="2026-01-20T13:57:20.941429419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:57:20.944098 containerd[2143]: time="2026-01-20T13:57:20.944057682Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27555003" Jan 20 13:57:20.947175 containerd[2143]: time="2026-01-20T13:57:20.947153432Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:57:20.951087 containerd[2143]: time="2026-01-20T13:57:20.950592848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:57:20.951087 containerd[2143]: time="2026-01-20T13:57:20.950848391Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 3.693800811s" Jan 20 13:57:20.951087 containerd[2143]: time="2026-01-20T13:57:20.950871960Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 20 13:57:20.951457 containerd[2143]: time="2026-01-20T13:57:20.951433065Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 20 13:57:21.608092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4054319342.mount: Deactivated successfully. 
Jan 20 13:57:22.388459 containerd[2143]: time="2026-01-20T13:57:22.388402421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:57:22.392222 containerd[2143]: time="2026-01-20T13:57:22.392175117Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=15956550" Jan 20 13:57:22.395063 containerd[2143]: time="2026-01-20T13:57:22.395035035Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:57:22.400109 containerd[2143]: time="2026-01-20T13:57:22.400074929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:57:22.400771 containerd[2143]: time="2026-01-20T13:57:22.400448876Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.448989138s" Jan 20 13:57:22.400771 containerd[2143]: time="2026-01-20T13:57:22.400474677Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 20 13:57:22.401002 containerd[2143]: time="2026-01-20T13:57:22.400874144Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 20 13:57:22.968258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount758169634.mount: Deactivated successfully. 
Jan 20 13:57:22.986836 containerd[2143]: time="2026-01-20T13:57:22.986785779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 13:57:22.989648 containerd[2143]: time="2026-01-20T13:57:22.989603726Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 20 13:57:22.992351 containerd[2143]: time="2026-01-20T13:57:22.992308431Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 13:57:22.996595 containerd[2143]: time="2026-01-20T13:57:22.996543133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 13:57:22.997194 containerd[2143]: time="2026-01-20T13:57:22.996749515Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 595.848858ms" Jan 20 13:57:22.997194 containerd[2143]: time="2026-01-20T13:57:22.996775772Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 20 13:57:22.997260 containerd[2143]: time="2026-01-20T13:57:22.997230241Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 20 13:57:23.674090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1287993357.mount: Deactivated successfully. 
Jan 20 13:57:25.867432 containerd[2143]: time="2026-01-20T13:57:25.867192101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:57:25.870280 containerd[2143]: time="2026-01-20T13:57:25.870240560Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=66060488" Jan 20 13:57:25.873584 containerd[2143]: time="2026-01-20T13:57:25.873541107Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:57:25.878401 containerd[2143]: time="2026-01-20T13:57:25.877726255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:57:25.878401 containerd[2143]: time="2026-01-20T13:57:25.878259143Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.880995741s" Jan 20 13:57:25.878401 containerd[2143]: time="2026-01-20T13:57:25.878284784Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 20 13:57:26.166573 waagent[2373]: 2026-01-20T13:57:26.166449Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 20 13:57:26.175251 waagent[2373]: 2026-01-20T13:57:26.175206Z INFO ExtHandler Jan 20 13:57:26.175338 waagent[2373]: 2026-01-20T13:57:26.175306Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: effaaed0-2513-41e2-ba79-d48c70e2da4b eTag: 12581561305112056824 source: Fabric] Jan 20 13:57:26.175626 waagent[2373]: 2026-01-20T13:57:26.175595Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 20 13:57:26.176116 waagent[2373]: 2026-01-20T13:57:26.176081Z INFO ExtHandler Jan 20 13:57:26.176162 waagent[2373]: 2026-01-20T13:57:26.176143Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 20 13:57:26.180104 waagent[2373]: 2026-01-20T13:57:26.180072Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 20 13:57:26.234461 waagent[2373]: 2026-01-20T13:57:26.234402Z INFO ExtHandler Downloaded certificate {'thumbprint': 'CFF6BEFC4E4A7E9D313D1B47755EC070BB493194', 'hasPrivateKey': True} Jan 20 13:57:26.234849 waagent[2373]: 2026-01-20T13:57:26.234812Z INFO ExtHandler Fetch goal state completed Jan 20 13:57:26.235136 waagent[2373]: 2026-01-20T13:57:26.235107Z INFO ExtHandler ExtHandler Jan 20 13:57:26.235183 waagent[2373]: 2026-01-20T13:57:26.235164Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 5db74c1c-3752-476f-8f5f-49c4705646f2 correlation dd7fdeca-081b-456d-b0de-4625ce9f7b5d created: 2026-01-20T13:57:20.973862Z] Jan 20 13:57:26.235428 waagent[2373]: 2026-01-20T13:57:26.235376Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jan 20 13:57:26.235794 waagent[2373]: 2026-01-20T13:57:26.235767Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Jan 20 13:57:28.148517 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 13:57:28.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:57:28.152450 systemd[1]: kubelet.service: Consumed 110ms CPU time, 104.4M memory peak. Jan 20 13:57:28.157235 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 13:57:28.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:57:28.173095 kernel: audit: type=1130 audit(1768917448.147:327): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:57:28.173173 kernel: audit: type=1131 audit(1768917448.151:328): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:57:28.189897 systemd[1]: Reload requested from client PID 3208 ('systemctl') (unit session-10.scope)... Jan 20 13:57:28.189914 systemd[1]: Reloading... Jan 20 13:57:28.285413 zram_generator::config[3267]: No configuration found. Jan 20 13:57:28.455011 systemd[1]: Reloading finished in 264 ms. Jan 20 13:57:28.471000 audit: BPF prog-id=87 op=LOAD Jan 20 13:57:28.471000 audit: BPF prog-id=83 op=UNLOAD Jan 20 13:57:28.485463 kernel: audit: type=1334 audit(1768917448.471:329): prog-id=87 op=LOAD Jan 20 13:57:28.485514 kernel: audit: type=1334 audit(1768917448.471:330): prog-id=83 op=UNLOAD Jan 20 13:57:28.477000 audit: BPF prog-id=88 op=LOAD Jan 20 13:57:28.489727 kernel: audit: type=1334 audit(1768917448.477:331): prog-id=88 op=LOAD Jan 20 13:57:28.477000 audit: BPF prog-id=73 op=UNLOAD Jan 20 13:57:28.493773 kernel: audit: type=1334 audit(1768917448.477:332): prog-id=73 op=UNLOAD Jan 20 13:57:28.498210 kernel: audit: type=1334 audit(1768917448.477:333): prog-id=89 op=LOAD Jan 20 13:57:28.477000 audit: BPF prog-id=89 op=LOAD Jan 20 13:57:28.477000 audit: BPF prog-id=90 op=LOAD Jan 20 13:57:28.503145 kernel: audit: type=1334 audit(1768917448.477:334): prog-id=90 op=LOAD Jan 20 13:57:28.477000 audit: BPF prog-id=74 op=UNLOAD Jan 20 13:57:28.508118 kernel: audit: type=1334 audit(1768917448.477:335): prog-id=74 op=UNLOAD Jan 20 13:57:28.477000 audit: BPF prog-id=75 op=UNLOAD Jan 20 13:57:28.512954 kernel: audit: type=1334 audit(1768917448.477:336): prog-id=75 op=UNLOAD Jan 20 13:57:28.478000 audit: BPF prog-id=91 op=LOAD Jan 20 13:57:28.478000 audit: BPF prog-id=92 op=LOAD Jan 20 13:57:28.478000 audit: BPF prog-id=68 op=UNLOAD Jan 20 13:57:28.478000 audit: BPF prog-id=69 op=UNLOAD Jan 20 13:57:28.478000 audit: BPF prog-id=93 op=LOAD Jan 20 13:57:28.478000 audit: BPF prog-id=76 op=UNLOAD Jan 20 13:57:28.478000 audit: BPF prog-id=94 op=LOAD Jan 20 13:57:28.479000 audit: BPF prog-id=95 op=LOAD Jan 20 13:57:28.479000 audit: BPF prog-id=77 op=UNLOAD Jan 20 13:57:28.479000 audit: BPF prog-id=78 op=UNLOAD Jan 20 13:57:28.479000 audit: BPF prog-id=96 op=LOAD Jan 20 13:57:28.479000 audit: BPF 
prog-id=67 op=UNLOAD Jan 20 13:57:28.479000 audit: BPF prog-id=97 op=LOAD Jan 20 13:57:28.492000 audit: BPF prog-id=84 op=UNLOAD Jan 20 13:57:28.492000 audit: BPF prog-id=98 op=LOAD Jan 20 13:57:28.492000 audit: BPF prog-id=99 op=LOAD Jan 20 13:57:28.492000 audit: BPF prog-id=85 op=UNLOAD Jan 20 13:57:28.492000 audit: BPF prog-id=86 op=UNLOAD Jan 20 13:57:28.496000 audit: BPF prog-id=100 op=LOAD Jan 20 13:57:28.496000 audit: BPF prog-id=70 op=UNLOAD Jan 20 13:57:28.496000 audit: BPF prog-id=101 op=LOAD Jan 20 13:57:28.496000 audit: BPF prog-id=102 op=LOAD Jan 20 13:57:28.496000 audit: BPF prog-id=71 op=UNLOAD Jan 20 13:57:28.496000 audit: BPF prog-id=72 op=UNLOAD Jan 20 13:57:28.501000 audit: BPF prog-id=103 op=LOAD Jan 20 13:57:28.501000 audit: BPF prog-id=80 op=UNLOAD Jan 20 13:57:28.501000 audit: BPF prog-id=104 op=LOAD Jan 20 13:57:28.501000 audit: BPF prog-id=105 op=LOAD Jan 20 13:57:28.501000 audit: BPF prog-id=81 op=UNLOAD Jan 20 13:57:28.501000 audit: BPF prog-id=82 op=UNLOAD Jan 20 13:57:28.506000 audit: BPF prog-id=106 op=LOAD Jan 20 13:57:28.506000 audit: BPF prog-id=79 op=UNLOAD Jan 20 13:57:28.521134 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 13:57:28.521189 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 13:57:28.521492 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 13:57:28.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 13:57:28.521542 systemd[1]: kubelet.service: Consumed 75ms CPU time, 95.2M memory peak. Jan 20 13:57:28.522777 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 13:57:28.725268 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 13:57:28.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:57:28.731580 (kubelet)[3325]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 13:57:28.757209 kubelet[3325]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 13:57:28.757209 kubelet[3325]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 13:57:28.757209 kubelet[3325]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 20 13:57:28.757209 kubelet[3325]: I0120 13:57:28.756995 3325 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 13:57:29.359269 kubelet[3325]: I0120 13:57:29.358732 3325 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 13:57:29.359269 kubelet[3325]: I0120 13:57:29.358763 3325 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 13:57:29.359650 kubelet[3325]: I0120 13:57:29.359632 3325 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 13:57:29.381187 kubelet[3325]: E0120 13:57:29.381160 3325 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.32:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.32:6443: connect: connection refused" logger="UnhandledError" Jan 20 13:57:29.381969 kubelet[3325]: I0120 13:57:29.381939 3325 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 13:57:29.386414 kubelet[3325]: I0120 13:57:29.386398 3325 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 13:57:29.389504 kubelet[3325]: I0120 13:57:29.389429 3325 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 20 13:57:29.390898 kubelet[3325]: I0120 13:57:29.390862 3325 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 13:57:29.391104 kubelet[3325]: I0120 13:57:29.390976 3325 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-9999.1.1-f-6b32856eb5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 13:57:29.391245 kubelet[3325]: I0120 13:57:29.391231 3325 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 
13:57:29.391293 kubelet[3325]: I0120 13:57:29.391285 3325 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 13:57:29.391477 kubelet[3325]: I0120 13:57:29.391463 3325 state_mem.go:36] "Initialized new in-memory state store" Jan 20 13:57:29.393863 kubelet[3325]: I0120 13:57:29.393844 3325 kubelet.go:446] "Attempting to sync node with API server" Jan 20 13:57:29.393938 kubelet[3325]: I0120 13:57:29.393929 3325 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 13:57:29.394017 kubelet[3325]: I0120 13:57:29.394008 3325 kubelet.go:352] "Adding apiserver pod source" Jan 20 13:57:29.394068 kubelet[3325]: I0120 13:57:29.394062 3325 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 13:57:29.396436 kubelet[3325]: W0120 13:57:29.395980 3325 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-9999.1.1-f-6b32856eb5&limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Jan 20 13:57:29.396436 kubelet[3325]: E0120 13:57:29.396031 3325 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-9999.1.1-f-6b32856eb5&limit=500&resourceVersion=0\": dial tcp 10.200.20.32:6443: connect: connection refused" logger="UnhandledError" Jan 20 13:57:29.396436 kubelet[3325]: W0120 13:57:29.396301 3325 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Jan 20 13:57:29.396436 kubelet[3325]: E0120 13:57:29.396327 3325 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.32:6443: connect: connection refused" logger="UnhandledError" Jan 20 13:57:29.396632 kubelet[3325]: I0120 13:57:29.396611 3325 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 13:57:29.396945 kubelet[3325]: I0120 13:57:29.396919 3325 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 13:57:29.396983 kubelet[3325]: W0120 13:57:29.396974 3325 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 20 13:57:29.398143 kubelet[3325]: I0120 13:57:29.398123 3325 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 13:57:29.398211 kubelet[3325]: I0120 13:57:29.398153 3325 server.go:1287] "Started kubelet" Jan 20 13:57:29.402698 kubelet[3325]: E0120 13:57:29.402608 3325 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.32:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.32:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-9999.1.1-f-6b32856eb5.188c750c505db248 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-9999.1.1-f-6b32856eb5,UID:ci-9999.1.1-f-6b32856eb5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-9999.1.1-f-6b32856eb5,},FirstTimestamp:2026-01-20 13:57:29.398137416 +0000 UTC m=+0.664355156,LastTimestamp:2026-01-20 13:57:29.398137416 +0000 UTC m=+0.664355156,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-9999.1.1-f-6b32856eb5,}" Jan 20 13:57:29.402863 kubelet[3325]: I0120 13:57:29.402842 3325 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 13:57:29.403956 kubelet[3325]: I0120 13:57:29.403938 3325 server.go:479] "Adding debug handlers to kubelet server" Jan 20 13:57:29.405824 kubelet[3325]: I0120 13:57:29.403962 3325 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 13:57:29.405947 kubelet[3325]: I0120 13:57:29.405931 3325 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 13:57:29.407465 kubelet[3325]: I0120 13:57:29.407447 3325 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 13:57:29.407566 kubelet[3325]: I0120 13:57:29.407544 3325 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 13:57:29.407774 kubelet[3325]: I0120 13:57:29.407755 3325 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 13:57:29.409273 kubelet[3325]: I0120 13:57:29.409251 3325 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 13:57:29.409363 kubelet[3325]: I0120 13:57:29.409303 3325 reconciler.go:26] "Reconciler: start to sync state" Jan 20 13:57:29.409571 kubelet[3325]: W0120 13:57:29.409537 3325 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Jan 20 13:57:29.409621 kubelet[3325]: E0120 13:57:29.409580 3325 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.32:6443: connect: connection refused" logger="UnhandledError" Jan 20 13:57:29.409871 kubelet[3325]: E0120 13:57:29.409842 3325 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 13:57:29.409000 audit[3337]: NETFILTER_CFG table=mangle:45 family=2 entries=2 op=nft_register_chain pid=3337 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:57:29.410935 kubelet[3325]: E0120 13:57:29.410588 3325 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-9999.1.1-f-6b32856eb5\" not found" Jan 20 13:57:29.410935 kubelet[3325]: E0120 13:57:29.410880 3325 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-9999.1.1-f-6b32856eb5?timeout=10s\": dial tcp 10.200.20.32:6443: connect: connection refused" interval="200ms" Jan 20 13:57:29.409000 audit[3337]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc2a72290 a2=0 a3=0 items=0 ppid=3325 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:29.409000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 20 13:57:29.411136 kubelet[3325]: I0120 13:57:29.411069 3325 factory.go:221] Registration of the systemd container factory successfully Jan 20 13:57:29.411136 kubelet[3325]: I0120 13:57:29.411128 3325 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 13:57:29.412052 kubelet[3325]: I0120 13:57:29.412026 3325 factory.go:221] Registration of the containerd container factory successfully Jan 20 13:57:29.411000 audit[3338]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=3338 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:57:29.411000 audit[3338]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffefea9de0 a2=0 a3=0 items=0 ppid=3325 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:29.411000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 13:57:29.414000 audit[3340]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=3340 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:57:29.414000 audit[3340]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe88063e0 a2=0 a3=0 items=0 ppid=3325 pid=3340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:29.414000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 13:57:29.416000 audit[3342]: NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=3342 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:57:29.416000 audit[3342]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffffc000e20 a2=0 a3=0 items=0 ppid=3325 pid=3342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:29.416000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 13:57:29.421000 audit[3345]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3345 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:57:29.421000 audit[3345]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffc453c8d0 a2=0 a3=0 items=0 ppid=3325 pid=3345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:29.421000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 20 13:57:29.423743 kubelet[3325]: I0120 13:57:29.423494 3325 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 13:57:29.422000 audit[3347]: NETFILTER_CFG table=mangle:50 family=10 entries=2 op=nft_register_chain pid=3347 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:57:29.422000 audit[3347]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc24cb320 a2=0 a3=0 items=0 ppid=3325 pid=3347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:29.422000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 20 13:57:29.425080 kubelet[3325]: I0120 13:57:29.424821 3325 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 13:57:29.425080 kubelet[3325]: I0120 13:57:29.424840 3325 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 13:57:29.425080 kubelet[3325]: I0120 13:57:29.424858 3325 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 20 13:57:29.425080 kubelet[3325]: I0120 13:57:29.424864 3325 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 13:57:29.425080 kubelet[3325]: E0120 13:57:29.424896 3325 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 13:57:29.424000 audit[3348]: NETFILTER_CFG table=mangle:51 family=2 entries=1 op=nft_register_chain pid=3348 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:57:29.424000 audit[3348]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff7e9d1a0 a2=0 a3=0 items=0 ppid=3325 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:29.424000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 20 13:57:29.424000 audit[3349]: NETFILTER_CFG table=nat:52 family=2 entries=1 op=nft_register_chain pid=3349 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:57:29.424000 audit[3349]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe65cbf60 a2=0 a3=0 items=0 ppid=3325 pid=3349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:29.424000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 20 13:57:29.425000 audit[3350]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=3350 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:57:29.425000 audit[3350]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc7590be0 a2=0 a3=0 items=0 ppid=3325 pid=3350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:29.425000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 20 13:57:29.426000 audit[3351]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3351 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:57:29.426000 audit[3351]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffea9f2880 a2=0 a3=0 items=0 ppid=3325 pid=3351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:29.426000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 20 13:57:29.427000 audit[3352]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=3352 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:57:29.427000 audit[3352]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd5641ad0 a2=0 a3=0 items=0 ppid=3325 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
13:57:29.427000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 20 13:57:29.428000 audit[3353]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=3353 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:57:29.428000 audit[3353]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff77b0dd0 a2=0 a3=0 items=0 ppid=3325 pid=3353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:29.428000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 20 13:57:29.430149 kubelet[3325]: W0120 13:57:29.430117 3325 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Jan 20 13:57:29.430235 kubelet[3325]: E0120 13:57:29.430220 3325 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.32:6443: connect: connection refused" logger="UnhandledError" Jan 20 13:57:29.436804 kubelet[3325]: I0120 13:57:29.436782 3325 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 13:57:29.436877 kubelet[3325]: I0120 13:57:29.436798 3325 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 13:57:29.436877 kubelet[3325]: I0120 13:57:29.436832 3325 state_mem.go:36] "Initialized new in-memory state store" Jan 20 13:57:29.445759 kubelet[3325]: I0120 13:57:29.445734 3325 policy_none.go:49] "None policy: Start" Jan 20 13:57:29.445759 kubelet[3325]: I0120 13:57:29.445756 3325 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 13:57:29.445759 kubelet[3325]: I0120 13:57:29.445766 3325 state_mem.go:35] "Initializing new in-memory state store" Jan 20 13:57:29.456237 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 13:57:29.467879 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 13:57:29.471449 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 20 13:57:29.479074 kubelet[3325]: I0120 13:57:29.479049 3325 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 13:57:29.479236 kubelet[3325]: I0120 13:57:29.479217 3325 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 13:57:29.479272 kubelet[3325]: I0120 13:57:29.479235 3325 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 13:57:29.479819 kubelet[3325]: I0120 13:57:29.479711 3325 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 13:57:29.481253 kubelet[3325]: E0120 13:57:29.481095 3325 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 13:57:29.481253 kubelet[3325]: E0120 13:57:29.481129 3325 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-9999.1.1-f-6b32856eb5\" not found" Jan 20 13:57:29.535324 systemd[1]: Created slice kubepods-burstable-pod279dd82ea4a07af0168c65a691279e53.slice - libcontainer container kubepods-burstable-pod279dd82ea4a07af0168c65a691279e53.slice. Jan 20 13:57:29.545968 kubelet[3325]: E0120 13:57:29.545913 3325 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999.1.1-f-6b32856eb5\" not found" node="ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:29.548514 systemd[1]: Created slice kubepods-burstable-pod065547cd70e8b14211b03b82e23da5fc.slice - libcontainer container kubepods-burstable-pod065547cd70e8b14211b03b82e23da5fc.slice. Jan 20 13:57:29.550064 kubelet[3325]: E0120 13:57:29.549935 3325 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999.1.1-f-6b32856eb5\" not found" node="ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:29.552210 systemd[1]: Created slice kubepods-burstable-podea243c7eaa4bfdeb29c3d710d81dd024.slice - libcontainer container kubepods-burstable-podea243c7eaa4bfdeb29c3d710d81dd024.slice. Jan 20 13:57:29.553622 kubelet[3325]: E0120 13:57:29.553599 3325 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999.1.1-f-6b32856eb5\" not found" node="ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:29.581302 kubelet[3325]: I0120 13:57:29.580946 3325 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:29.581302 kubelet[3325]: E0120 13:57:29.581248 3325 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.32:6443/api/v1/nodes\": dial tcp 10.200.20.32:6443: connect: connection refused" node="ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:29.609899 kubelet[3325]: I0120 13:57:29.609793 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/279dd82ea4a07af0168c65a691279e53-kubeconfig\") pod \"kube-scheduler-ci-9999.1.1-f-6b32856eb5\" (UID: \"279dd82ea4a07af0168c65a691279e53\") " pod="kube-system/kube-scheduler-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:29.610105 kubelet[3325]: I0120 13:57:29.610049 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/065547cd70e8b14211b03b82e23da5fc-ca-certs\") pod \"kube-apiserver-ci-9999.1.1-f-6b32856eb5\" (UID: \"065547cd70e8b14211b03b82e23da5fc\") " pod="kube-system/kube-apiserver-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:29.610105 kubelet[3325]: I0120 13:57:29.610072 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea243c7eaa4bfdeb29c3d710d81dd024-ca-certs\") pod \"kube-controller-manager-ci-9999.1.1-f-6b32856eb5\" (UID: \"ea243c7eaa4bfdeb29c3d710d81dd024\") " pod="kube-system/kube-controller-manager-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:29.610105 kubelet[3325]: I0120 13:57:29.610083 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ea243c7eaa4bfdeb29c3d710d81dd024-flexvolume-dir\") pod 
\"kube-controller-manager-ci-9999.1.1-f-6b32856eb5\" (UID: \"ea243c7eaa4bfdeb29c3d710d81dd024\") " pod="kube-system/kube-controller-manager-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:29.610267 kubelet[3325]: I0120 13:57:29.610215 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea243c7eaa4bfdeb29c3d710d81dd024-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-9999.1.1-f-6b32856eb5\" (UID: \"ea243c7eaa4bfdeb29c3d710d81dd024\") " pod="kube-system/kube-controller-manager-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:29.610267 kubelet[3325]: I0120 13:57:29.610236 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/065547cd70e8b14211b03b82e23da5fc-k8s-certs\") pod \"kube-apiserver-ci-9999.1.1-f-6b32856eb5\" (UID: \"065547cd70e8b14211b03b82e23da5fc\") " pod="kube-system/kube-apiserver-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:29.610382 kubelet[3325]: I0120 13:57:29.610365 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/065547cd70e8b14211b03b82e23da5fc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-9999.1.1-f-6b32856eb5\" (UID: \"065547cd70e8b14211b03b82e23da5fc\") " pod="kube-system/kube-apiserver-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:29.610382 kubelet[3325]: I0120 13:57:29.610424 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea243c7eaa4bfdeb29c3d710d81dd024-k8s-certs\") pod \"kube-controller-manager-ci-9999.1.1-f-6b32856eb5\" (UID: \"ea243c7eaa4bfdeb29c3d710d81dd024\") " pod="kube-system/kube-controller-manager-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:29.610382 kubelet[3325]: I0120 13:57:29.610442 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea243c7eaa4bfdeb29c3d710d81dd024-kubeconfig\") pod \"kube-controller-manager-ci-9999.1.1-f-6b32856eb5\" (UID: \"ea243c7eaa4bfdeb29c3d710d81dd024\") " pod="kube-system/kube-controller-manager-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:29.612159 kubelet[3325]: E0120 13:57:29.612123 3325 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-9999.1.1-f-6b32856eb5?timeout=10s\": dial tcp 10.200.20.32:6443: connect: connection refused" interval="400ms" Jan 20 13:57:29.783454 kubelet[3325]: I0120 13:57:29.783090 3325 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:29.783454 kubelet[3325]: E0120 13:57:29.783360 3325 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.32:6443/api/v1/nodes\": dial tcp 10.200.20.32:6443: connect: connection refused" node="ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:29.848105 containerd[2143]: time="2026-01-20T13:57:29.848045816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-9999.1.1-f-6b32856eb5,Uid:279dd82ea4a07af0168c65a691279e53,Namespace:kube-system,Attempt:0,}" Jan 20 13:57:29.851145 containerd[2143]: time="2026-01-20T13:57:29.851111006Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-9999.1.1-f-6b32856eb5,Uid:065547cd70e8b14211b03b82e23da5fc,Namespace:kube-system,Attempt:0,}" Jan 20 13:57:29.855341 containerd[2143]: time="2026-01-20T13:57:29.855310221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-9999.1.1-f-6b32856eb5,Uid:ea243c7eaa4bfdeb29c3d710d81dd024,Namespace:kube-system,Attempt:0,}" Jan 20 13:57:29.929499 containerd[2143]: time="2026-01-20T13:57:29.929337960Z" level=info msg="connecting to shim e090705c310489471e1255e1e80069ec516180829190e23cc672b819f2898e10" address="unix:///run/containerd/s/2627ff59a2d8c49f1a705be85bee21d96bd321fedce74b6517dbcbdb6307359b" namespace=k8s.io protocol=ttrpc version=3 Jan 20 13:57:29.942405 containerd[2143]: time="2026-01-20T13:57:29.942145109Z" level=info msg="connecting to shim 8666c4c133e8f012f8fdf792c31b0cb971c300d1eaf2d25f854e8fa01d16a961" address="unix:///run/containerd/s/abfc6c19f565da879b8ec74b3190b0ef0c8fc985dc75167adcf9041d5d866f65" namespace=k8s.io protocol=ttrpc version=3 Jan 20 13:57:29.946780 containerd[2143]: time="2026-01-20T13:57:29.946754881Z" level=info msg="connecting to shim 9fe28a8633e77cbb9871ba501509469c307dfa566cf965d4d5bceb234e3a8205" address="unix:///run/containerd/s/86b6fc7cff1f52625de087e80281352f58ab2478fc4121293b892cfc24e2b25c" namespace=k8s.io protocol=ttrpc version=3 Jan 20 13:57:29.960622 systemd[1]: Started cri-containerd-e090705c310489471e1255e1e80069ec516180829190e23cc672b819f2898e10.scope - libcontainer container e090705c310489471e1255e1e80069ec516180829190e23cc672b819f2898e10. Jan 20 13:57:29.978524 systemd[1]: Started cri-containerd-8666c4c133e8f012f8fdf792c31b0cb971c300d1eaf2d25f854e8fa01d16a961.scope - libcontainer container 8666c4c133e8f012f8fdf792c31b0cb971c300d1eaf2d25f854e8fa01d16a961. Jan 20 13:57:29.979291 systemd[1]: Started cri-containerd-9fe28a8633e77cbb9871ba501509469c307dfa566cf965d4d5bceb234e3a8205.scope - libcontainer container 9fe28a8633e77cbb9871ba501509469c307dfa566cf965d4d5bceb234e3a8205. 
Jan 20 13:57:29.982000 audit: BPF prog-id=107 op=LOAD Jan 20 13:57:29.983000 audit: BPF prog-id=108 op=LOAD Jan 20 13:57:29.983000 audit[3379]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3364 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:29.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530393037303563333130343839343731653132353565316538303036 Jan 20 13:57:29.983000 audit: BPF prog-id=108 op=UNLOAD Jan 20 13:57:29.983000 audit[3379]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3364 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:29.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530393037303563333130343839343731653132353565316538303036 Jan 20 13:57:29.983000 audit: BPF prog-id=109 op=LOAD Jan 20 13:57:29.983000 audit[3379]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3364 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:29.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530393037303563333130343839343731653132353565316538303036 Jan 20 13:57:29.983000 audit: BPF prog-id=110 op=LOAD Jan 20 13:57:29.983000 audit[3379]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3364 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:29.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530393037303563333130343839343731653132353565316538303036 Jan 20 13:57:29.983000 audit: BPF prog-id=110 op=UNLOAD Jan 20 13:57:29.983000 audit[3379]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3364 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:29.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530393037303563333130343839343731653132353565316538303036 Jan 20 13:57:29.983000 audit: BPF prog-id=109 op=UNLOAD Jan 20 13:57:29.983000 audit[3379]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3364 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:29.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530393037303563333130343839343731653132353565316538303036 Jan 20 13:57:29.983000 audit: BPF prog-id=111 op=LOAD Jan 20 13:57:29.983000 audit[3379]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3364 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:29.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530393037303563333130343839343731653132353565316538303036 Jan 20 13:57:29.992000 audit: BPF prog-id=112 op=LOAD Jan 20 13:57:29.992000 audit: BPF prog-id=113 op=LOAD Jan 20 13:57:29.992000 audit[3429]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=3407 pid=3429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:29.992000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966653238613836333365373763626239383731626135303135303934 Jan 20 13:57:29.992000 audit: BPF prog-id=113 op=UNLOAD Jan 20 13:57:29.992000 audit[3429]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3407 pid=3429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:29.992000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966653238613836333365373763626239383731626135303135303934 Jan 20 13:57:29.992000 audit: BPF prog-id=114 op=LOAD Jan 20 13:57:29.992000 audit[3429]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3407 pid=3429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:29.992000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966653238613836333365373763626239383731626135303135303934 Jan 20 13:57:29.993000 audit: BPF prog-id=115 op=LOAD Jan 20 13:57:29.993000 audit[3429]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 
ppid=3407 pid=3429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:29.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966653238613836333365373763626239383731626135303135303934 Jan 20 13:57:29.993000 audit: BPF prog-id=115 op=UNLOAD Jan 20 13:57:29.993000 audit[3429]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3407 pid=3429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:29.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966653238613836333365373763626239383731626135303135303934 Jan 20 13:57:29.993000 audit: BPF prog-id=114 op=UNLOAD Jan 20 13:57:29.993000 audit[3429]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3407 pid=3429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:29.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966653238613836333365373763626239383731626135303135303934 Jan 20 13:57:29.993000 audit: BPF prog-id=116 op=LOAD Jan 20 13:57:29.993000 audit[3429]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=3407 pid=3429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:29.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966653238613836333365373763626239383731626135303135303934 Jan 20 13:57:30.001000 audit: BPF prog-id=117 op=LOAD Jan 20 13:57:30.003000 audit: BPF prog-id=118 op=LOAD Jan 20 13:57:30.003000 audit[3424]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3390 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:30.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836363663346331333365386630313266386664663739326333316230 Jan 20 13:57:30.004000 audit: BPF prog-id=118 op=UNLOAD Jan 20 13:57:30.004000 audit[3424]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3390 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:30.004000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836363663346331333365386630313266386664663739326333316230 Jan 20 13:57:30.005000 audit: BPF prog-id=119 op=LOAD Jan 20 13:57:30.005000 audit[3424]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3390 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:30.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836363663346331333365386630313266386664663739326333316230 Jan 20 13:57:30.005000 audit: BPF prog-id=120 op=LOAD Jan 20 13:57:30.005000 audit[3424]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3390 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:30.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836363663346331333365386630313266386664663739326333316230 Jan 20 13:57:30.005000 audit: BPF prog-id=120 op=UNLOAD Jan 20 13:57:30.005000 audit[3424]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3390 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:30.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836363663346331333365386630313266386664663739326333316230 Jan 20 13:57:30.005000 audit: BPF prog-id=119 op=UNLOAD Jan 20 13:57:30.005000 audit[3424]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3390 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:30.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836363663346331333365386630313266386664663739326333316230 Jan 20 13:57:30.005000 audit: BPF prog-id=121 op=LOAD Jan 20 13:57:30.005000 audit[3424]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3390 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:30.005000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836363663346331333365386630313266386664663739326333316230 Jan 20 13:57:30.013483 kubelet[3325]: E0120 13:57:30.013412 3325 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-9999.1.1-f-6b32856eb5?timeout=10s\": dial tcp 10.200.20.32:6443: connect: connection refused" interval="800ms" Jan 20 13:57:30.024590 containerd[2143]: time="2026-01-20T13:57:30.024549606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-9999.1.1-f-6b32856eb5,Uid:065547cd70e8b14211b03b82e23da5fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"e090705c310489471e1255e1e80069ec516180829190e23cc672b819f2898e10\"" Jan 20 13:57:30.028454 containerd[2143]: time="2026-01-20T13:57:30.028424764Z" level=info msg="CreateContainer within sandbox \"e090705c310489471e1255e1e80069ec516180829190e23cc672b819f2898e10\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 13:57:30.031001 containerd[2143]: time="2026-01-20T13:57:30.030963473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-9999.1.1-f-6b32856eb5,Uid:ea243c7eaa4bfdeb29c3d710d81dd024,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fe28a8633e77cbb9871ba501509469c307dfa566cf965d4d5bceb234e3a8205\"" Jan 20 13:57:30.034498 containerd[2143]: time="2026-01-20T13:57:30.034458843Z" level=info msg="CreateContainer within sandbox \"9fe28a8633e77cbb9871ba501509469c307dfa566cf965d4d5bceb234e3a8205\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 13:57:30.049236 containerd[2143]: time="2026-01-20T13:57:30.049205475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-9999.1.1-f-6b32856eb5,Uid:279dd82ea4a07af0168c65a691279e53,Namespace:kube-system,Attempt:0,} returns sandbox id \"8666c4c133e8f012f8fdf792c31b0cb971c300d1eaf2d25f854e8fa01d16a961\"" Jan 20 13:57:30.054629 containerd[2143]: time="2026-01-20T13:57:30.054594135Z" level=info msg="CreateContainer within sandbox \"8666c4c133e8f012f8fdf792c31b0cb971c300d1eaf2d25f854e8fa01d16a961\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 13:57:30.057177 containerd[2143]: time="2026-01-20T13:57:30.057142837Z" level=info msg="Container 19ab798875f174d0ea3cbf293e7537f7567dee6fe491ba0904b92ce0676d6708: CDI devices from CRI Config.CDIDevices: []" Jan 20 13:57:30.065555 containerd[2143]: time="2026-01-20T13:57:30.065314805Z" level=info msg="Container 6e552ddba1db00b4722a8eafce5bb8a020cee97fee1ee3a40a693868bc10bf12: CDI devices from CRI Config.CDIDevices: []" Jan 20 13:57:30.094832 containerd[2143]: time="2026-01-20T13:57:30.094789093Z" level=info msg="CreateContainer within sandbox \"e090705c310489471e1255e1e80069ec516180829190e23cc672b819f2898e10\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"19ab798875f174d0ea3cbf293e7537f7567dee6fe491ba0904b92ce0676d6708\"" Jan 20 13:57:30.095443 containerd[2143]: time="2026-01-20T13:57:30.095414088Z" level=info msg="StartContainer for \"19ab798875f174d0ea3cbf293e7537f7567dee6fe491ba0904b92ce0676d6708\"" Jan 20 13:57:30.096508 containerd[2143]: time="2026-01-20T13:57:30.096469376Z" level=info msg="connecting to shim 
19ab798875f174d0ea3cbf293e7537f7567dee6fe491ba0904b92ce0676d6708" address="unix:///run/containerd/s/2627ff59a2d8c49f1a705be85bee21d96bd321fedce74b6517dbcbdb6307359b" protocol=ttrpc version=3 Jan 20 13:57:30.099812 containerd[2143]: time="2026-01-20T13:57:30.099415922Z" level=info msg="Container 360c79ba5e214890dacfde0bab658d6f3bcc7fbf262d1098fdc7f7e082210e43: CDI devices from CRI Config.CDIDevices: []" Jan 20 13:57:30.108521 containerd[2143]: time="2026-01-20T13:57:30.108487862Z" level=info msg="CreateContainer within sandbox \"9fe28a8633e77cbb9871ba501509469c307dfa566cf965d4d5bceb234e3a8205\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6e552ddba1db00b4722a8eafce5bb8a020cee97fee1ee3a40a693868bc10bf12\"" Jan 20 13:57:30.109112 containerd[2143]: time="2026-01-20T13:57:30.109067951Z" level=info msg="StartContainer for \"6e552ddba1db00b4722a8eafce5bb8a020cee97fee1ee3a40a693868bc10bf12\"" Jan 20 13:57:30.112510 containerd[2143]: time="2026-01-20T13:57:30.112080907Z" level=info msg="connecting to shim 6e552ddba1db00b4722a8eafce5bb8a020cee97fee1ee3a40a693868bc10bf12" address="unix:///run/containerd/s/86b6fc7cff1f52625de087e80281352f58ab2478fc4121293b892cfc24e2b25c" protocol=ttrpc version=3 Jan 20 13:57:30.114151 containerd[2143]: time="2026-01-20T13:57:30.114109473Z" level=info msg="CreateContainer within sandbox \"8666c4c133e8f012f8fdf792c31b0cb971c300d1eaf2d25f854e8fa01d16a961\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"360c79ba5e214890dacfde0bab658d6f3bcc7fbf262d1098fdc7f7e082210e43\"" Jan 20 13:57:30.115699 systemd[1]: Started cri-containerd-19ab798875f174d0ea3cbf293e7537f7567dee6fe491ba0904b92ce0676d6708.scope - libcontainer container 19ab798875f174d0ea3cbf293e7537f7567dee6fe491ba0904b92ce0676d6708. 
Jan 20 13:57:30.117672 containerd[2143]: time="2026-01-20T13:57:30.116502577Z" level=info msg="StartContainer for \"360c79ba5e214890dacfde0bab658d6f3bcc7fbf262d1098fdc7f7e082210e43\"" Jan 20 13:57:30.117672 containerd[2143]: time="2026-01-20T13:57:30.117142125Z" level=info msg="connecting to shim 360c79ba5e214890dacfde0bab658d6f3bcc7fbf262d1098fdc7f7e082210e43" address="unix:///run/containerd/s/abfc6c19f565da879b8ec74b3190b0ef0c8fc985dc75167adcf9041d5d866f65" protocol=ttrpc version=3 Jan 20 13:57:30.129000 audit: BPF prog-id=122 op=LOAD Jan 20 13:57:30.131000 audit: BPF prog-id=123 op=LOAD Jan 20 13:57:30.131000 audit[3500]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=3364 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:30.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139616237393838373566313734643065613363626632393365373533 Jan 20 13:57:30.132000 audit: BPF prog-id=123 op=UNLOAD Jan 20 13:57:30.132000 audit[3500]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3364 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:30.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139616237393838373566313734643065613363626632393365373533 Jan 20 13:57:30.133000 audit: BPF prog-id=124 op=LOAD Jan 20 13:57:30.133000 audit[3500]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=3364 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:30.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139616237393838373566313734643065613363626632393365373533 Jan 20 13:57:30.133000 audit: BPF prog-id=125 op=LOAD Jan 20 13:57:30.133000 audit[3500]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=3364 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:30.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139616237393838373566313734643065613363626632393365373533 Jan 20 13:57:30.133000 audit: BPF prog-id=125 op=UNLOAD Jan 20 13:57:30.133000 audit[3500]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3364 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:30.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139616237393838373566313734643065613363626632393365373533 Jan 20 13:57:30.133000 audit: BPF prog-id=124 op=UNLOAD Jan 20 13:57:30.133000 audit[3500]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3364 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:30.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139616237393838373566313734643065613363626632393365373533 Jan 20 13:57:30.133000 audit: BPF prog-id=126 op=LOAD Jan 20 13:57:30.133000 audit[3500]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=3364 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:30.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139616237393838373566313734643065613363626632393365373533 Jan 20 13:57:30.138560 systemd[1]: Started cri-containerd-6e552ddba1db00b4722a8eafce5bb8a020cee97fee1ee3a40a693868bc10bf12.scope - libcontainer container 6e552ddba1db00b4722a8eafce5bb8a020cee97fee1ee3a40a693868bc10bf12. Jan 20 13:57:30.142854 systemd[1]: Started cri-containerd-360c79ba5e214890dacfde0bab658d6f3bcc7fbf262d1098fdc7f7e082210e43.scope - libcontainer container 360c79ba5e214890dacfde0bab658d6f3bcc7fbf262d1098fdc7f7e082210e43. 
Jan 20 13:57:30.154000 audit: BPF prog-id=127 op=LOAD Jan 20 13:57:30.155000 audit: BPF prog-id=128 op=LOAD Jan 20 13:57:30.155000 audit[3520]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000220180 a2=98 a3=0 items=0 ppid=3390 pid=3520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:30.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336306337396261356532313438393064616366646530626162363538 Jan 20 13:57:30.156000 audit: BPF prog-id=128 op=UNLOAD Jan 20 13:57:30.156000 audit[3520]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3390 pid=3520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:30.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336306337396261356532313438393064616366646530626162363538 Jan 20 13:57:30.156000 audit: BPF prog-id=129 op=LOAD Jan 20 13:57:30.156000 audit[3520]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40002203e8 a2=98 a3=0 items=0 ppid=3390 pid=3520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:30.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336306337396261356532313438393064616366646530626162363538 Jan 20 13:57:30.156000 audit: BPF prog-id=130 op=LOAD Jan 20 13:57:30.156000 audit[3520]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000220168 a2=98 a3=0 items=0 ppid=3390 pid=3520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:30.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336306337396261356532313438393064616366646530626162363538 Jan 20 13:57:30.156000 audit: BPF prog-id=130 op=UNLOAD Jan 20 13:57:30.156000 audit[3520]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3390 pid=3520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:30.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336306337396261356532313438393064616366646530626162363538 Jan 20 13:57:30.156000 audit: BPF prog-id=129 op=UNLOAD Jan 20 13:57:30.156000 audit[3520]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3390 pid=3520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:30.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336306337396261356532313438393064616366646530626162363538 Jan 20 13:57:30.156000 audit: BPF prog-id=131 op=LOAD Jan 20 13:57:30.156000 audit[3520]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000220648 a2=98 a3=0 items=0 ppid=3390 pid=3520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:30.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336306337396261356532313438393064616366646530626162363538 Jan 20 13:57:30.161000 audit: BPF prog-id=132 op=LOAD Jan 20 13:57:30.162000 audit: BPF prog-id=133 op=LOAD Jan 20 13:57:30.162000 audit[3511]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=3407 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:30.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665353532646462613164623030623437323261386561666365356262 Jan 20 13:57:30.162000 audit: BPF prog-id=133 op=UNLOAD Jan 20 13:57:30.162000 audit[3511]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3407 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:30.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665353532646462613164623030623437323261386561666365356262 Jan 20 13:57:30.162000 audit: BPF prog-id=134 op=LOAD Jan 20 13:57:30.162000 audit[3511]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=3407 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:30.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665353532646462613164623030623437323261386561666365356262 Jan 20 13:57:30.163000 audit: BPF prog-id=135 op=LOAD Jan 20 13:57:30.163000 audit[3511]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 
ppid=3407 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:30.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665353532646462613164623030623437323261386561666365356262 Jan 20 13:57:30.163000 audit: BPF prog-id=135 op=UNLOAD Jan 20 13:57:30.163000 audit[3511]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3407 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:30.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665353532646462613164623030623437323261386561666365356262 Jan 20 13:57:30.163000 audit: BPF prog-id=134 op=UNLOAD Jan 20 13:57:30.163000 audit[3511]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3407 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:30.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665353532646462613164623030623437323261386561666365356262 Jan 20 13:57:30.163000 audit: BPF prog-id=136 op=LOAD Jan 20 13:57:30.163000 audit[3511]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=3407 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:30.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665353532646462613164623030623437323261386561666365356262 Jan 20 13:57:30.176070 containerd[2143]: time="2026-01-20T13:57:30.175960633Z" level=info msg="StartContainer for \"19ab798875f174d0ea3cbf293e7537f7567dee6fe491ba0904b92ce0676d6708\" returns successfully" Jan 20 13:57:30.188126 kubelet[3325]: I0120 13:57:30.187208 3325 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:30.191127 kubelet[3325]: E0120 13:57:30.191078 3325 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.32:6443/api/v1/nodes\": dial tcp 10.200.20.32:6443: connect: connection refused" node="ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:30.194653 containerd[2143]: time="2026-01-20T13:57:30.194371905Z" level=info msg="StartContainer for \"360c79ba5e214890dacfde0bab658d6f3bcc7fbf262d1098fdc7f7e082210e43\" returns successfully" Jan 20 13:57:30.209793 containerd[2143]: time="2026-01-20T13:57:30.209703083Z" level=info msg="StartContainer for 
\"6e552ddba1db00b4722a8eafce5bb8a020cee97fee1ee3a40a693868bc10bf12\" returns successfully" Jan 20 13:57:30.438612 kubelet[3325]: E0120 13:57:30.438271 3325 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999.1.1-f-6b32856eb5\" not found" node="ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:30.439220 kubelet[3325]: E0120 13:57:30.438833 3325 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999.1.1-f-6b32856eb5\" not found" node="ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:30.442227 kubelet[3325]: E0120 13:57:30.442214 3325 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999.1.1-f-6b32856eb5\" not found" node="ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:30.994194 kubelet[3325]: I0120 13:57:30.993435 3325 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:31.203554 kubelet[3325]: E0120 13:57:31.203522 3325 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-9999.1.1-f-6b32856eb5\" not found" node="ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:31.646438 kubelet[3325]: I0120 13:57:31.644877 3325 kubelet_node_status.go:78] "Successfully registered node" node="ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:31.646438 kubelet[3325]: E0120 13:57:31.644908 3325 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-9999.1.1-f-6b32856eb5\": node \"ci-9999.1.1-f-6b32856eb5\" not found" Jan 20 13:57:31.646438 kubelet[3325]: I0120 13:57:31.645033 3325 apiserver.go:52] "Watching apiserver" Jan 20 13:57:31.653781 kubelet[3325]: I0120 13:57:31.653478 3325 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:31.653941 kubelet[3325]: I0120 13:57:31.653926 3325 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:31.668660 kubelet[3325]: E0120 13:57:31.668455 3325 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-9999.1.1-f-6b32856eb5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:31.668882 kubelet[3325]: E0120 13:57:31.668843 3325 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-9999.1.1-f-6b32856eb5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:31.709554 kubelet[3325]: I0120 13:57:31.709487 3325 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 13:57:31.714847 kubelet[3325]: I0120 13:57:31.714483 3325 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:31.716801 kubelet[3325]: E0120 13:57:31.716774 3325 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-9999.1.1-f-6b32856eb5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:31.716801 kubelet[3325]: I0120 13:57:31.716809 3325 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:31.718131 kubelet[3325]: E0120 13:57:31.718102 3325 kubelet.go:3196] 
"Failed creating a mirror pod" err="pods \"kube-apiserver-ci-9999.1.1-f-6b32856eb5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:31.718131 kubelet[3325]: I0120 13:57:31.718125 3325 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:31.719323 kubelet[3325]: E0120 13:57:31.719274 3325 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-9999.1.1-f-6b32856eb5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:33.281241 systemd[1]: Reload requested from client PID 3595 ('systemctl') (unit session-10.scope)... Jan 20 13:57:33.281256 systemd[1]: Reloading... Jan 20 13:57:33.368433 zram_generator::config[3648]: No configuration found. Jan 20 13:57:33.555968 systemd[1]: Reloading finished in 274 ms. Jan 20 13:57:33.578562 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 13:57:33.595221 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 13:57:33.595476 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 13:57:33.611405 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 20 13:57:33.611456 kernel: audit: type=1131 audit(1768917453.594:431): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:57:33.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:57:33.595539 systemd[1]: kubelet.service: Consumed 900ms CPU time, 127.3M memory peak. Jan 20 13:57:33.600151 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 20 13:57:33.609000 audit: BPF prog-id=137 op=LOAD Jan 20 13:57:33.616411 kernel: audit: type=1334 audit(1768917453.609:432): prog-id=137 op=LOAD Jan 20 13:57:33.609000 audit: BPF prog-id=106 op=UNLOAD Jan 20 13:57:33.620687 kernel: audit: type=1334 audit(1768917453.609:433): prog-id=106 op=UNLOAD Jan 20 13:57:33.610000 audit: BPF prog-id=138 op=LOAD Jan 20 13:57:33.624822 kernel: audit: type=1334 audit(1768917453.610:434): prog-id=138 op=LOAD Jan 20 13:57:33.610000 audit: BPF prog-id=87 op=UNLOAD Jan 20 13:57:33.629464 kernel: audit: type=1334 audit(1768917453.610:435): prog-id=87 op=UNLOAD Jan 20 13:57:33.614000 audit: BPF prog-id=139 op=LOAD Jan 20 13:57:33.633853 kernel: audit: type=1334 audit(1768917453.614:436): prog-id=139 op=LOAD Jan 20 13:57:33.623000 audit: BPF prog-id=140 op=LOAD Jan 20 13:57:33.638174 kernel: audit: type=1334 audit(1768917453.623:437): prog-id=140 op=LOAD Jan 20 13:57:33.623000 audit: BPF prog-id=91 op=UNLOAD Jan 20 13:57:33.642405 kernel: audit: type=1334 audit(1768917453.623:438): prog-id=91 op=UNLOAD Jan 20 13:57:33.623000 audit: BPF prog-id=92 op=UNLOAD Jan 20 13:57:33.646868 kernel: audit: type=1334 audit(1768917453.623:439): prog-id=92 op=UNLOAD Jan 20 13:57:33.627000 audit: BPF prog-id=141 op=LOAD Jan 20 13:57:33.651204 kernel: audit: type=1334 audit(1768917453.627:440): prog-id=141 op=LOAD Jan 20 13:57:33.627000 audit: BPF prog-id=97 op=UNLOAD Jan 20 13:57:33.628000 audit: BPF prog-id=142 op=LOAD Jan 20 13:57:33.628000 audit: BPF prog-id=143 op=LOAD Jan 20 13:57:33.628000 audit: BPF prog-id=98 op=UNLOAD Jan 20 13:57:33.628000 audit: BPF prog-id=99 op=UNLOAD Jan 20 13:57:33.632000 audit: BPF prog-id=144 op=LOAD Jan 20 13:57:33.632000 audit: BPF prog-id=100 op=UNLOAD Jan 20 13:57:33.632000 audit: BPF prog-id=145 op=LOAD Jan 20 13:57:33.636000 audit: BPF prog-id=146 op=LOAD Jan 20 13:57:33.636000 audit: BPF prog-id=101 op=UNLOAD Jan 20 13:57:33.636000 audit: BPF prog-id=102 op=UNLOAD Jan 20 13:57:33.641000 audit: BPF prog-id=147 op=LOAD Jan 20 13:57:33.641000 audit: BPF prog-id=96 op=UNLOAD Jan 20 13:57:33.645000 audit: BPF prog-id=148 op=LOAD Jan 20 13:57:33.645000 audit: BPF prog-id=93 op=UNLOAD Jan 20 13:57:33.649000 audit: BPF prog-id=149 op=LOAD Jan 20 13:57:33.649000 audit: BPF prog-id=150 op=LOAD Jan 20 13:57:33.649000 audit: BPF prog-id=94 op=UNLOAD Jan 20 13:57:33.649000 audit: BPF prog-id=95 op=UNLOAD Jan 20 13:57:33.650000 audit: BPF prog-id=151 op=LOAD Jan 20 13:57:33.650000 audit: BPF prog-id=88 op=UNLOAD Jan 20 13:57:33.650000 audit: BPF prog-id=152 op=LOAD Jan 20 13:57:33.650000 audit: BPF prog-id=153 op=LOAD Jan 20 13:57:33.650000 audit: BPF prog-id=89 op=UNLOAD Jan 20 13:57:33.650000 audit: BPF prog-id=90 op=UNLOAD Jan 20 13:57:33.651000 audit: BPF prog-id=154 op=LOAD Jan 20 13:57:33.651000 audit: BPF prog-id=103 op=UNLOAD Jan 20 13:57:33.651000 audit: BPF prog-id=155 op=LOAD Jan 20 13:57:33.651000 audit: BPF prog-id=156 op=LOAD Jan 20 13:57:33.651000 audit: BPF prog-id=104 op=UNLOAD Jan 20 13:57:33.651000 audit: BPF prog-id=105 op=UNLOAD Jan 20 13:57:33.752716 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 13:57:33.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 13:57:33.763786 (kubelet)[3709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 13:57:33.789107 kubelet[3709]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 13:57:33.789379 kubelet[3709]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 13:57:33.789436 kubelet[3709]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 13:57:33.789612 kubelet[3709]: I0120 13:57:33.789562 3709 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 13:57:33.794262 kubelet[3709]: I0120 13:57:33.794229 3709 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 13:57:33.794262 kubelet[3709]: I0120 13:57:33.794256 3709 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 13:57:33.794477 kubelet[3709]: I0120 13:57:33.794457 3709 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 13:57:33.795347 kubelet[3709]: I0120 13:57:33.795328 3709 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 20 13:57:33.796949 kubelet[3709]: I0120 13:57:33.796771 3709 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 13:57:33.799981 kubelet[3709]: I0120 13:57:33.799965 3709 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 13:57:33.802678 kubelet[3709]: I0120 13:57:33.802657 3709 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 13:57:33.802963 kubelet[3709]: I0120 13:57:33.802935 3709 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 13:57:33.803143 kubelet[3709]: I0120 13:57:33.803017 3709 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-9999.1.1-f-6b32856eb5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 13:57:33.803349 kubelet[3709]: I0120 13:57:33.803328 3709 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 13:57:33.803432 kubelet[3709]: I0120 13:57:33.803423 3709 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 13:57:33.803616 kubelet[3709]: I0120 13:57:33.803594 3709 state_mem.go:36] "Initialized new in-memory state store" Jan 20 13:57:33.803736 kubelet[3709]: I0120 13:57:33.803719 3709 kubelet.go:446] "Attempting to sync node with API server" Jan 20 13:57:33.803736 kubelet[3709]: I0120 13:57:33.803734 3709 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 13:57:33.803782 kubelet[3709]: I0120 13:57:33.803751 3709 kubelet.go:352] "Adding apiserver pod source" Jan 20 13:57:33.803782 kubelet[3709]: I0120 13:57:33.803760 3709 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 13:57:33.804842 kubelet[3709]: I0120 13:57:33.804808 3709 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 13:57:33.805128 kubelet[3709]: I0120 13:57:33.805108 3709 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 13:57:33.805753 kubelet[3709]: I0120 13:57:33.805732 3709 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 13:57:33.805798 kubelet[3709]: I0120 13:57:33.805760 3709 server.go:1287] "Started kubelet" Jan 20 13:57:33.809122 kubelet[3709]: I0120 13:57:33.808129 3709 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 13:57:33.813399 kubelet[3709]: I0120 13:57:33.812269 3709 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Jan 20 13:57:33.813399 kubelet[3709]: I0120 13:57:33.812878 3709 server.go:479] "Adding debug handlers to kubelet server" Jan 20 13:57:33.813682 kubelet[3709]: I0120 13:57:33.813581 3709 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 13:57:33.813764 kubelet[3709]: I0120 13:57:33.813750 3709 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 13:57:33.813957 kubelet[3709]: I0120 13:57:33.813897 3709 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 13:57:33.816285 kubelet[3709]: I0120 13:57:33.816259 3709 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 13:57:33.816546 kubelet[3709]: E0120 13:57:33.816458 3709 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-9999.1.1-f-6b32856eb5\" not found" Jan 20 13:57:33.823708 kubelet[3709]: I0120 13:57:33.823640 3709 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 13:57:33.825646 kubelet[3709]: I0120 13:57:33.825618 3709 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 13:57:33.825735 kubelet[3709]: I0120 13:57:33.825717 3709 reconciler.go:26] "Reconciler: start to sync state" Jan 20 13:57:33.829188 kubelet[3709]: I0120 13:57:33.829155 3709 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 13:57:33.830374 kubelet[3709]: I0120 13:57:33.830345 3709 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 13:57:33.830374 kubelet[3709]: I0120 13:57:33.830366 3709 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 13:57:33.830374 kubelet[3709]: I0120 13:57:33.830379 3709 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 13:57:33.830374 kubelet[3709]: I0120 13:57:33.830401 3709 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 13:57:33.830516 kubelet[3709]: E0120 13:57:33.830434 3709 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 13:57:33.839324 kubelet[3709]: I0120 13:57:33.839302 3709 factory.go:221] Registration of the containerd container factory successfully Jan 20 13:57:33.839324 kubelet[3709]: I0120 13:57:33.839318 3709 factory.go:221] Registration of the systemd container factory successfully Jan 20 13:57:33.857437 kubelet[3709]: E0120 13:57:33.857096 3709 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 13:57:33.886642 kubelet[3709]: I0120 13:57:33.886613 3709 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 13:57:33.886642 kubelet[3709]: I0120 13:57:33.886631 3709 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 13:57:33.886642 kubelet[3709]: I0120 13:57:33.886649 3709 state_mem.go:36] "Initialized new in-memory state store" Jan 20 13:57:33.886783 kubelet[3709]: I0120 13:57:33.886764 3709 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 13:57:33.886801 kubelet[3709]: I0120 13:57:33.886778 3709 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 13:57:33.886801 kubelet[3709]: I0120 13:57:33.886791 3709 policy_none.go:49] "None policy: Start" Jan 20 13:57:33.886801 kubelet[3709]: I0120 13:57:33.886798 3709 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 13:57:33.886866 kubelet[3709]: I0120 13:57:33.886806 3709 state_mem.go:35] "Initializing new in-memory state store" Jan 20 13:57:33.886880 kubelet[3709]: I0120 13:57:33.886869 3709 state_mem.go:75] "Updated machine memory state" Jan 20 13:57:33.890773 kubelet[3709]: I0120 13:57:33.890635 3709 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 13:57:33.891202 kubelet[3709]: I0120 13:57:33.891180 3709 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 13:57:33.891347 kubelet[3709]: I0120 13:57:33.891199 3709 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 13:57:33.892413 kubelet[3709]: I0120 13:57:33.892302 3709 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 13:57:33.892980 kubelet[3709]: E0120 13:57:33.892962 3709 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 13:57:33.931395 kubelet[3709]: I0120 13:57:33.931340 3709 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:33.931730 kubelet[3709]: I0120 13:57:33.931706 3709 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:33.931940 kubelet[3709]: I0120 13:57:33.931921 3709 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:33.939508 kubelet[3709]: W0120 13:57:33.939477 3709 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 20 13:57:33.949616 kubelet[3709]: W0120 13:57:33.949583 3709 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 20 13:57:33.950491 kubelet[3709]: W0120 13:57:33.950472 3709 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 20 13:57:33.993849 kubelet[3709]: I0120 13:57:33.993818 3709 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:34.006864 kubelet[3709]: I0120 13:57:34.006822 3709 kubelet_node_status.go:124] "Node was previously registered" node="ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:34.006986 kubelet[3709]: I0120 13:57:34.006906 3709 kubelet_node_status.go:78] "Successfully registered node" node="ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:34.126756 kubelet[3709]: I0120 13:57:34.126704 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ea243c7eaa4bfdeb29c3d710d81dd024-flexvolume-dir\") pod \"kube-controller-manager-ci-9999.1.1-f-6b32856eb5\" (UID: \"ea243c7eaa4bfdeb29c3d710d81dd024\") " pod="kube-system/kube-controller-manager-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:34.126756 kubelet[3709]: I0120 13:57:34.126734 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea243c7eaa4bfdeb29c3d710d81dd024-k8s-certs\") pod \"kube-controller-manager-ci-9999.1.1-f-6b32856eb5\" (UID: \"ea243c7eaa4bfdeb29c3d710d81dd024\") " pod="kube-system/kube-controller-manager-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:34.127098 kubelet[3709]: I0120 13:57:34.126963 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea243c7eaa4bfdeb29c3d710d81dd024-kubeconfig\") pod \"kube-controller-manager-ci-9999.1.1-f-6b32856eb5\" (UID: \"ea243c7eaa4bfdeb29c3d710d81dd024\") " pod="kube-system/kube-controller-manager-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:34.127098 kubelet[3709]: I0120 13:57:34.126992 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/279dd82ea4a07af0168c65a691279e53-kubeconfig\") pod \"kube-scheduler-ci-9999.1.1-f-6b32856eb5\" (UID: \"279dd82ea4a07af0168c65a691279e53\") " pod="kube-system/kube-scheduler-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:34.127098 kubelet[3709]: I0120 13:57:34.127028 3709 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/065547cd70e8b14211b03b82e23da5fc-ca-certs\") pod \"kube-apiserver-ci-9999.1.1-f-6b32856eb5\" (UID: \"065547cd70e8b14211b03b82e23da5fc\") " pod="kube-system/kube-apiserver-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:34.127098 kubelet[3709]: I0120 13:57:34.127039 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/065547cd70e8b14211b03b82e23da5fc-k8s-certs\") pod \"kube-apiserver-ci-9999.1.1-f-6b32856eb5\" (UID: \"065547cd70e8b14211b03b82e23da5fc\") " pod="kube-system/kube-apiserver-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:34.127098 kubelet[3709]: I0120 13:57:34.127051 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/065547cd70e8b14211b03b82e23da5fc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-9999.1.1-f-6b32856eb5\" (UID: \"065547cd70e8b14211b03b82e23da5fc\") " pod="kube-system/kube-apiserver-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:34.127208 kubelet[3709]: I0120 13:57:34.127064 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea243c7eaa4bfdeb29c3d710d81dd024-ca-certs\") pod \"kube-controller-manager-ci-9999.1.1-f-6b32856eb5\" (UID: \"ea243c7eaa4bfdeb29c3d710d81dd024\") " pod="kube-system/kube-controller-manager-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:34.127296 kubelet[3709]: I0120 13:57:34.127265 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea243c7eaa4bfdeb29c3d710d81dd024-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-9999.1.1-f-6b32856eb5\" (UID: \"ea243c7eaa4bfdeb29c3d710d81dd024\") " pod="kube-system/kube-controller-manager-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:34.810158 kubelet[3709]: I0120 13:57:34.809909 3709 apiserver.go:52] "Watching apiserver" Jan 20 13:57:34.825902 kubelet[3709]: I0120 13:57:34.825866 3709 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 13:57:34.870898 kubelet[3709]: I0120 13:57:34.870381 3709 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:34.870898 kubelet[3709]: I0120 13:57:34.870591 3709 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:34.889738 kubelet[3709]: W0120 13:57:34.889702 3709 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 20 13:57:34.889866 kubelet[3709]: E0120 13:57:34.889754 3709 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-9999.1.1-f-6b32856eb5\" already exists" pod="kube-system/kube-apiserver-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:34.890038 kubelet[3709]: W0120 13:57:34.889917 3709 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 20 13:57:34.890038 kubelet[3709]: E0120 13:57:34.889984 3709 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-9999.1.1-f-6b32856eb5\" already exists" 
pod="kube-system/kube-scheduler-ci-9999.1.1-f-6b32856eb5" Jan 20 13:57:34.901123 kubelet[3709]: I0120 13:57:34.901059 3709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-9999.1.1-f-6b32856eb5" podStartSLOduration=1.9010453379999999 podStartE2EDuration="1.901045338s" podCreationTimestamp="2026-01-20 13:57:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 13:57:34.900781338 +0000 UTC m=+1.134222537" watchObservedRunningTime="2026-01-20 13:57:34.901045338 +0000 UTC m=+1.134486537" Jan 20 13:57:34.901263 kubelet[3709]: I0120 13:57:34.901162 3709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-9999.1.1-f-6b32856eb5" podStartSLOduration=1.9011587890000001 podStartE2EDuration="1.901158789s" podCreationTimestamp="2026-01-20 13:57:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 13:57:34.890300539 +0000 UTC m=+1.123741730" watchObservedRunningTime="2026-01-20 13:57:34.901158789 +0000 UTC m=+1.134599980" Jan 20 13:57:34.924311 kubelet[3709]: I0120 13:57:34.924254 3709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-9999.1.1-f-6b32856eb5" podStartSLOduration=1.924240051 podStartE2EDuration="1.924240051s" podCreationTimestamp="2026-01-20 13:57:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 13:57:34.911311018 +0000 UTC m=+1.144752209" watchObservedRunningTime="2026-01-20 13:57:34.924240051 +0000 UTC m=+1.157681242" Jan 20 13:57:38.849308 kubelet[3709]: I0120 13:57:38.849184 3709 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 13:57:38.850422 containerd[2143]: time="2026-01-20T13:57:38.850009725Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 20 13:57:38.850651 kubelet[3709]: I0120 13:57:38.850188 3709 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 13:57:39.597799 systemd[1]: Created slice kubepods-besteffort-pod479516f9_6b17_4ab9_93a3_cd10dc04eaf3.slice - libcontainer container kubepods-besteffort-pod479516f9_6b17_4ab9_93a3_cd10dc04eaf3.slice. 
Jan 20 13:57:39.654758 kubelet[3709]: I0120 13:57:39.654630 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vccsz\" (UniqueName: \"kubernetes.io/projected/479516f9-6b17-4ab9-93a3-cd10dc04eaf3-kube-api-access-vccsz\") pod \"kube-proxy-f4l5b\" (UID: \"479516f9-6b17-4ab9-93a3-cd10dc04eaf3\") " pod="kube-system/kube-proxy-f4l5b" Jan 20 13:57:39.654758 kubelet[3709]: I0120 13:57:39.654679 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/479516f9-6b17-4ab9-93a3-cd10dc04eaf3-kube-proxy\") pod \"kube-proxy-f4l5b\" (UID: \"479516f9-6b17-4ab9-93a3-cd10dc04eaf3\") " pod="kube-system/kube-proxy-f4l5b" Jan 20 13:57:39.654758 kubelet[3709]: I0120 13:57:39.654691 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/479516f9-6b17-4ab9-93a3-cd10dc04eaf3-xtables-lock\") pod \"kube-proxy-f4l5b\" (UID: \"479516f9-6b17-4ab9-93a3-cd10dc04eaf3\") " pod="kube-system/kube-proxy-f4l5b" Jan 20 13:57:39.654758 kubelet[3709]: I0120 13:57:39.654703 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/479516f9-6b17-4ab9-93a3-cd10dc04eaf3-lib-modules\") pod \"kube-proxy-f4l5b\" (UID: \"479516f9-6b17-4ab9-93a3-cd10dc04eaf3\") " pod="kube-system/kube-proxy-f4l5b" Jan 20 13:57:39.760436 kubelet[3709]: E0120 13:57:39.760380 3709 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 20 13:57:39.760436 kubelet[3709]: E0120 13:57:39.760425 3709 projected.go:194] Error preparing data for projected volume kube-api-access-vccsz for pod kube-system/kube-proxy-f4l5b: configmap "kube-root-ca.crt" not found Jan 20 13:57:39.760593 kubelet[3709]: E0120 13:57:39.760478 3709 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/479516f9-6b17-4ab9-93a3-cd10dc04eaf3-kube-api-access-vccsz podName:479516f9-6b17-4ab9-93a3-cd10dc04eaf3 nodeName:}" failed. No retries permitted until 2026-01-20 13:57:40.260460907 +0000 UTC m=+6.493902098 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vccsz" (UniqueName: "kubernetes.io/projected/479516f9-6b17-4ab9-93a3-cd10dc04eaf3-kube-api-access-vccsz") pod "kube-proxy-f4l5b" (UID: "479516f9-6b17-4ab9-93a3-cd10dc04eaf3") : configmap "kube-root-ca.crt" not found Jan 20 13:57:39.929783 systemd[1]: Created slice kubepods-besteffort-pod0027f132_07f6_4a53_a31c_bc7af6c12abd.slice - libcontainer container kubepods-besteffort-pod0027f132_07f6_4a53_a31c_bc7af6c12abd.slice. 
Jan 20 13:57:39.956975 kubelet[3709]: I0120 13:57:39.956876 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk86t\" (UniqueName: \"kubernetes.io/projected/0027f132-07f6-4a53-a31c-bc7af6c12abd-kube-api-access-zk86t\") pod \"tigera-operator-7dcd859c48-78f6c\" (UID: \"0027f132-07f6-4a53-a31c-bc7af6c12abd\") " pod="tigera-operator/tigera-operator-7dcd859c48-78f6c" Jan 20 13:57:39.956975 kubelet[3709]: I0120 13:57:39.956942 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0027f132-07f6-4a53-a31c-bc7af6c12abd-var-lib-calico\") pod \"tigera-operator-7dcd859c48-78f6c\" (UID: \"0027f132-07f6-4a53-a31c-bc7af6c12abd\") " pod="tigera-operator/tigera-operator-7dcd859c48-78f6c" Jan 20 13:57:40.234459 containerd[2143]: time="2026-01-20T13:57:40.234265447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-78f6c,Uid:0027f132-07f6-4a53-a31c-bc7af6c12abd,Namespace:tigera-operator,Attempt:0,}" Jan 20 13:57:40.279809 containerd[2143]: time="2026-01-20T13:57:40.279768821Z" level=info msg="connecting to shim 82a9956a6d67d3be51224fb59098b64b46bffc419e1b04e03bfd76a214d9f1a0" address="unix:///run/containerd/s/83563c4cf6bcb0d1f29714558ef24dbf1df6fb5829e9b1ff177fc3c7425f08b0" namespace=k8s.io protocol=ttrpc version=3 Jan 20 13:57:40.298554 systemd[1]: Started cri-containerd-82a9956a6d67d3be51224fb59098b64b46bffc419e1b04e03bfd76a214d9f1a0.scope - libcontainer container 82a9956a6d67d3be51224fb59098b64b46bffc419e1b04e03bfd76a214d9f1a0. Jan 20 13:57:40.305000 audit: BPF prog-id=157 op=LOAD Jan 20 13:57:40.309045 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 20 13:57:40.309114 kernel: audit: type=1334 audit(1768917460.305:473): prog-id=157 op=LOAD Jan 20 13:57:40.313000 audit: BPF prog-id=158 op=LOAD Jan 20 13:57:40.318750 kernel: audit: type=1334 audit(1768917460.313:474): prog-id=158 op=LOAD Jan 20 13:57:40.313000 audit[3775]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3764 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.335916 kernel: audit: type=1300 audit(1768917460.313:474): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3764 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832613939353661366436376433626535313232346662353930393862 Jan 20 13:57:40.352693 kernel: audit: type=1327 audit(1768917460.313:474): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832613939353661366436376433626535313232346662353930393862 Jan 20 13:57:40.313000 audit: BPF prog-id=158 op=UNLOAD Jan 20 13:57:40.357594 kernel: audit: type=1334 audit(1768917460.313:475): prog-id=158 op=UNLOAD Jan 20 13:57:40.313000 audit[3775]: SYSCALL arch=c00000b7 
syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3764 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.375867 kernel: audit: type=1300 audit(1768917460.313:475): arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3764 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832613939353661366436376433626535313232346662353930393862 Jan 20 13:57:40.393005 kernel: audit: type=1327 audit(1768917460.313:475): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832613939353661366436376433626535313232346662353930393862 Jan 20 13:57:40.313000 audit: BPF prog-id=159 op=LOAD Jan 20 13:57:40.398078 kernel: audit: type=1334 audit(1768917460.313:476): prog-id=159 op=LOAD Jan 20 13:57:40.313000 audit[3775]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3764 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.415646 kernel: audit: type=1300 audit(1768917460.313:476): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3764 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.432633 kernel: audit: type=1327 audit(1768917460.313:476): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832613939353661366436376433626535313232346662353930393862 Jan 20 13:57:40.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832613939353661366436376433626535313232346662353930393862 Jan 20 13:57:40.313000 audit: BPF prog-id=160 op=LOAD Jan 20 13:57:40.313000 audit[3775]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3764 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832613939353661366436376433626535313232346662353930393862 Jan 20 13:57:40.313000 audit: BPF prog-id=160 op=UNLOAD Jan 20 13:57:40.313000 audit[3775]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 
a1=0 a2=0 a3=0 items=0 ppid=3764 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832613939353661366436376433626535313232346662353930393862 Jan 20 13:57:40.313000 audit: BPF prog-id=159 op=UNLOAD Jan 20 13:57:40.313000 audit[3775]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3764 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832613939353661366436376433626535313232346662353930393862 Jan 20 13:57:40.313000 audit: BPF prog-id=161 op=LOAD Jan 20 13:57:40.313000 audit[3775]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3764 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832613939353661366436376433626535313232346662353930393862 Jan 20 13:57:40.448807 containerd[2143]: time="2026-01-20T13:57:40.448756589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-78f6c,Uid:0027f132-07f6-4a53-a31c-bc7af6c12abd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"82a9956a6d67d3be51224fb59098b64b46bffc419e1b04e03bfd76a214d9f1a0\"" Jan 20 13:57:40.450616 containerd[2143]: time="2026-01-20T13:57:40.450580198Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 20 13:57:40.507313 containerd[2143]: time="2026-01-20T13:57:40.506815379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f4l5b,Uid:479516f9-6b17-4ab9-93a3-cd10dc04eaf3,Namespace:kube-system,Attempt:0,}" Jan 20 13:57:40.544924 containerd[2143]: time="2026-01-20T13:57:40.544823726Z" level=info msg="connecting to shim a2a5bf28bc5afbf67da727f1c400351e40a1b0c9489b6262684ea7093cb86d59" address="unix:///run/containerd/s/891e070adc5db04d69d1b90b35c2f79bd0c533a7c2a2cfc1e1caf445e0231f73" namespace=k8s.io protocol=ttrpc version=3 Jan 20 13:57:40.565573 systemd[1]: Started cri-containerd-a2a5bf28bc5afbf67da727f1c400351e40a1b0c9489b6262684ea7093cb86d59.scope - libcontainer container a2a5bf28bc5afbf67da727f1c400351e40a1b0c9489b6262684ea7093cb86d59. 
Jan 20 13:57:40.573000 audit: BPF prog-id=162 op=LOAD Jan 20 13:57:40.573000 audit: BPF prog-id=163 op=LOAD Jan 20 13:57:40.573000 audit[3822]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3811 pid=3822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132613562663238626335616662663637646137323766316334303033 Jan 20 13:57:40.573000 audit: BPF prog-id=163 op=UNLOAD Jan 20 13:57:40.573000 audit[3822]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3811 pid=3822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132613562663238626335616662663637646137323766316334303033 Jan 20 13:57:40.573000 audit: BPF prog-id=164 op=LOAD Jan 20 13:57:40.573000 audit[3822]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3811 pid=3822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132613562663238626335616662663637646137323766316334303033 Jan 20 13:57:40.573000 audit: BPF prog-id=165 op=LOAD Jan 20 13:57:40.573000 audit[3822]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3811 pid=3822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132613562663238626335616662663637646137323766316334303033 Jan 20 13:57:40.573000 audit: BPF prog-id=165 op=UNLOAD Jan 20 13:57:40.573000 audit[3822]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3811 pid=3822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132613562663238626335616662663637646137323766316334303033 Jan 20 13:57:40.573000 audit: BPF prog-id=164 op=UNLOAD Jan 20 13:57:40.573000 audit[3822]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3811 pid=3822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132613562663238626335616662663637646137323766316334303033 Jan 20 13:57:40.573000 audit: BPF prog-id=166 op=LOAD Jan 20 13:57:40.573000 audit[3822]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3811 pid=3822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132613562663238626335616662663637646137323766316334303033 Jan 20 13:57:40.587843 containerd[2143]: time="2026-01-20T13:57:40.587807109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f4l5b,Uid:479516f9-6b17-4ab9-93a3-cd10dc04eaf3,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2a5bf28bc5afbf67da727f1c400351e40a1b0c9489b6262684ea7093cb86d59\"" Jan 20 13:57:40.590662 containerd[2143]: time="2026-01-20T13:57:40.590193456Z" level=info msg="CreateContainer within sandbox \"a2a5bf28bc5afbf67da727f1c400351e40a1b0c9489b6262684ea7093cb86d59\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 13:57:40.632779 containerd[2143]: time="2026-01-20T13:57:40.632739425Z" level=info msg="Container 79400e9bf08fb7bc25d0e295fa2799f7dd7250471eea5905fb16a5e0f53e3a51: CDI devices from CRI Config.CDIDevices: []" Jan 20 13:57:40.651919 containerd[2143]: time="2026-01-20T13:57:40.651876335Z" level=info msg="CreateContainer within sandbox \"a2a5bf28bc5afbf67da727f1c400351e40a1b0c9489b6262684ea7093cb86d59\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"79400e9bf08fb7bc25d0e295fa2799f7dd7250471eea5905fb16a5e0f53e3a51\"" Jan 20 13:57:40.653990 containerd[2143]: time="2026-01-20T13:57:40.653297243Z" level=info msg="StartContainer for \"79400e9bf08fb7bc25d0e295fa2799f7dd7250471eea5905fb16a5e0f53e3a51\"" Jan 20 13:57:40.655097 containerd[2143]: time="2026-01-20T13:57:40.655070075Z" level=info msg="connecting to shim 79400e9bf08fb7bc25d0e295fa2799f7dd7250471eea5905fb16a5e0f53e3a51" address="unix:///run/containerd/s/891e070adc5db04d69d1b90b35c2f79bd0c533a7c2a2cfc1e1caf445e0231f73" protocol=ttrpc version=3 Jan 20 13:57:40.668553 systemd[1]: Started cri-containerd-79400e9bf08fb7bc25d0e295fa2799f7dd7250471eea5905fb16a5e0f53e3a51.scope - libcontainer container 79400e9bf08fb7bc25d0e295fa2799f7dd7250471eea5905fb16a5e0f53e3a51. 
Jan 20 13:57:40.707000 audit: BPF prog-id=167 op=LOAD Jan 20 13:57:40.707000 audit[3846]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3811 pid=3846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739343030653962663038666237626332356430653239356661323739 Jan 20 13:57:40.707000 audit: BPF prog-id=168 op=LOAD Jan 20 13:57:40.707000 audit[3846]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3811 pid=3846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739343030653962663038666237626332356430653239356661323739 Jan 20 13:57:40.707000 audit: BPF prog-id=168 op=UNLOAD Jan 20 13:57:40.707000 audit[3846]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3811 pid=3846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739343030653962663038666237626332356430653239356661323739 Jan 20 13:57:40.707000 audit: BPF prog-id=167 op=UNLOAD Jan 20 13:57:40.707000 audit[3846]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3811 pid=3846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739343030653962663038666237626332356430653239356661323739 Jan 20 13:57:40.707000 audit: BPF prog-id=169 op=LOAD Jan 20 13:57:40.707000 audit[3846]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3811 pid=3846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739343030653962663038666237626332356430653239356661323739 Jan 20 13:57:40.726733 containerd[2143]: time="2026-01-20T13:57:40.726686888Z" level=info msg="StartContainer for 
\"79400e9bf08fb7bc25d0e295fa2799f7dd7250471eea5905fb16a5e0f53e3a51\" returns successfully" Jan 20 13:57:40.832000 audit[3910]: NETFILTER_CFG table=mangle:57 family=10 entries=1 op=nft_register_chain pid=3910 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:57:40.832000 audit[3910]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff0365e60 a2=0 a3=1 items=0 ppid=3859 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.832000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 20 13:57:40.833000 audit[3909]: NETFILTER_CFG table=mangle:58 family=2 entries=1 op=nft_register_chain pid=3909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:57:40.833000 audit[3909]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffee0777b0 a2=0 a3=1 items=0 ppid=3859 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.833000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 20 13:57:40.835000 audit[3912]: NETFILTER_CFG table=nat:59 family=10 entries=1 op=nft_register_chain pid=3912 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:57:40.835000 audit[3912]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffa08dd30 a2=0 a3=1 items=0 ppid=3859 pid=3912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.835000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 20 13:57:40.836000 audit[3913]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_chain pid=3913 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:57:40.836000 audit[3913]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc2fbfdb0 a2=0 a3=1 items=0 ppid=3859 pid=3913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.836000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 20 13:57:40.837000 audit[3914]: NETFILTER_CFG table=filter:61 family=10 entries=1 op=nft_register_chain pid=3914 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:57:40.837000 audit[3914]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd9f9f550 a2=0 a3=1 items=0 ppid=3859 pid=3914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.837000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 20 13:57:40.838000 audit[3915]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_chain pid=3915 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:57:40.838000 audit[3915]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe4840510 a2=0 a3=1 items=0 ppid=3859 pid=3915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.838000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 20 13:57:40.911277 kubelet[3709]: I0120 13:57:40.911041 3709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f4l5b" podStartSLOduration=1.911023432 podStartE2EDuration="1.911023432s" podCreationTimestamp="2026-01-20 13:57:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 13:57:40.901097242 +0000 UTC m=+7.134538433" watchObservedRunningTime="2026-01-20 13:57:40.911023432 +0000 UTC m=+7.144464631" Jan 20 13:57:40.933000 audit[3916]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3916 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:57:40.933000 audit[3916]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe424efb0 a2=0 a3=1 items=0 ppid=3859 pid=3916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.933000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 20 13:57:40.936000 audit[3918]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3918 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:57:40.936000 audit[3918]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd3a775d0 a2=0 a3=1 items=0 ppid=3859 pid=3918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.936000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 20 13:57:40.939000 audit[3921]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_rule pid=3921 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:57:40.939000 audit[3921]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffffa631b00 a2=0 a3=1 items=0 ppid=3859 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.939000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 20 13:57:40.940000 audit[3922]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_chain pid=3922 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:57:40.940000 audit[3922]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffea04b00 a2=0 a3=1 items=0 ppid=3859 pid=3922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.940000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 20 13:57:40.942000 audit[3924]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3924 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:57:40.942000 audit[3924]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff1a31c40 a2=0 a3=1 items=0 ppid=3859 pid=3924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.942000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 20 13:57:40.943000 audit[3925]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3925 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:57:40.943000 audit[3925]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcf705100 a2=0 a3=1 items=0 ppid=3859 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.943000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 20 13:57:40.946000 audit[3927]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3927 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:57:40.946000 audit[3927]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe0726c10 a2=0 a3=1 items=0 ppid=3859 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.946000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 20 13:57:40.948000 audit[3930]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=3930 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:57:40.948000 audit[3930]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffcce3c830 a2=0 a3=1 items=0 ppid=3859 pid=3930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.948000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 20 13:57:40.949000 audit[3931]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=3931 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:57:40.949000 audit[3931]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff7cd2e60 a2=0 a3=1 items=0 ppid=3859 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.949000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 20 13:57:40.952000 audit[3933]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3933 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:57:40.952000 audit[3933]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff619d350 a2=0 a3=1 items=0 ppid=3859 pid=3933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.952000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 20 13:57:40.953000 audit[3934]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=3934 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:57:40.953000 audit[3934]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffeccbf7c0 a2=0 a3=1 items=0 ppid=3859 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.953000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 20 13:57:40.955000 audit[3936]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=3936 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:57:40.955000 audit[3936]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffeb827b00 a2=0 a3=1 items=0 ppid=3859 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.955000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 13:57:40.958000 audit[3939]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=3939 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:57:40.958000 audit[3939]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc6788050 a2=0 a3=1 items=0 ppid=3859 pid=3939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.958000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 13:57:40.961000 audit[3942]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=3942 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:57:40.961000 audit[3942]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffec93d1d0 a2=0 a3=1 items=0 ppid=3859 pid=3942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.961000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 20 13:57:40.962000 audit[3943]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3943 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:57:40.962000 audit[3943]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffbc43a30 a2=0 a3=1 items=0 ppid=3859 pid=3943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.962000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 20 13:57:40.964000 audit[3945]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3945 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:57:40.964000 audit[3945]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffcc093320 a2=0 a3=1 items=0 ppid=3859 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.964000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 13:57:40.968000 audit[3948]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_rule pid=3948 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:57:40.968000 audit[3948]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff143f340 a2=0 a3=1 items=0 ppid=3859 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.968000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 13:57:40.969000 audit[3949]: NETFILTER_CFG table=nat:80 family=2 entries=1 
op=nft_register_chain pid=3949 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:57:40.969000 audit[3949]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc5c96ff0 a2=0 a3=1 items=0 ppid=3859 pid=3949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.969000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 20 13:57:40.971000 audit[3951]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=3951 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 13:57:40.971000 audit[3951]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffded08160 a2=0 a3=1 items=0 ppid=3859 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:40.971000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 20 13:57:41.044000 audit[3957]: NETFILTER_CFG table=filter:82 family=2 entries=8 op=nft_register_rule pid=3957 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:57:41.044000 audit[3957]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd6f495d0 a2=0 a3=1 items=0 ppid=3859 pid=3957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:41.044000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:57:41.054000 audit[3957]: NETFILTER_CFG table=nat:83 family=2 entries=14 op=nft_register_chain pid=3957 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:57:41.054000 audit[3957]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffd6f495d0 a2=0 a3=1 items=0 ppid=3859 pid=3957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:41.054000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:57:41.056000 audit[3962]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3962 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:57:41.056000 audit[3962]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd4c6aef0 a2=0 a3=1 items=0 ppid=3859 pid=3962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:41.056000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 20 13:57:41.059000 audit[3964]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=3964 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:57:41.059000 audit[3964]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffefc922a0 a2=0 a3=1 items=0 ppid=3859 pid=3964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:41.059000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 20 13:57:41.062000 audit[3967]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=3967 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:57:41.062000 audit[3967]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe2dceea0 a2=0 a3=1 items=0 ppid=3859 pid=3967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:41.062000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 20 13:57:41.063000 audit[3968]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=3968 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:57:41.063000 audit[3968]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdbcb4110 a2=0 a3=1 items=0 ppid=3859 pid=3968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:41.063000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 20 13:57:41.068000 audit[3970]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=3970 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:57:41.068000 audit[3970]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffab1ecb0 a2=0 a3=1 items=0 ppid=3859 pid=3970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:41.068000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 20 13:57:41.070000 audit[3971]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3971 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:57:41.070000 audit[3971]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe22d9d00 a2=0 a3=1 items=0 ppid=3859 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:41.070000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 20 13:57:41.072000 audit[3973]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3973 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:57:41.072000 audit[3973]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff166b300 a2=0 a3=1 items=0 ppid=3859 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:41.072000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 20 13:57:41.075000 audit[3976]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=3976 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:57:41.075000 audit[3976]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffcebf0820 a2=0 a3=1 items=0 ppid=3859 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:41.075000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 20 13:57:41.076000 audit[3977]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=3977 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:57:41.076000 audit[3977]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff1585c90 a2=0 a3=1 items=0 ppid=3859 pid=3977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:41.076000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 20 13:57:41.078000 audit[3979]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3979 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:57:41.078000 audit[3979]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc54d0710 a2=0 a3=1 items=0 ppid=3859 pid=3979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:41.078000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 20 13:57:41.080000 audit[3980]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=3980 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:57:41.080000 audit[3980]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffc636530 a2=0 a3=1 items=0 ppid=3859 pid=3980 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:41.080000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 20 13:57:41.082000 audit[3982]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=3982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:57:41.082000 audit[3982]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc948e020 a2=0 a3=1 items=0 ppid=3859 pid=3982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:41.082000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 13:57:41.086000 audit[3985]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=3985 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:57:41.086000 audit[3985]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffeb15b4b0 a2=0 a3=1 items=0 ppid=3859 pid=3985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:41.086000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 20 13:57:41.089000 audit[3988]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=3988 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:57:41.089000 audit[3988]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe52691f0 a2=0 a3=1 items=0 ppid=3859 pid=3988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:41.089000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 20 13:57:41.090000 audit[3989]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3989 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:57:41.090000 audit[3989]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff4f3a7c0 a2=0 a3=1 items=0 ppid=3859 pid=3989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:41.090000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 20 13:57:41.092000 audit[3991]: NETFILTER_CFG table=nat:99 family=10 entries=1 op=nft_register_rule pid=3991 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:57:41.092000 audit[3991]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffd4a86eb0 a2=0 a3=1 items=0 ppid=3859 pid=3991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:41.092000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 13:57:41.095000 audit[3994]: NETFILTER_CFG table=nat:100 family=10 entries=1 op=nft_register_rule pid=3994 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:57:41.095000 audit[3994]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffedc23c60 a2=0 a3=1 items=0 ppid=3859 pid=3994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:41.095000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 13:57:41.096000 audit[3995]: NETFILTER_CFG table=nat:101 family=10 entries=1 op=nft_register_chain pid=3995 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:57:41.096000 audit[3995]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff8494410 a2=0 a3=1 items=0 ppid=3859 pid=3995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:41.096000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 20 13:57:41.098000 audit[3997]: NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=3997 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:57:41.098000 audit[3997]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffcf2bf450 a2=0 a3=1 items=0 ppid=3859 pid=3997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:41.098000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 20 13:57:41.100000 audit[3998]: NETFILTER_CFG table=filter:103 family=10 entries=1 op=nft_register_chain pid=3998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:57:41.100000 audit[3998]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff178a1f0 a2=0 a3=1 items=0 ppid=3859 pid=3998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:41.100000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 13:57:41.102000 audit[4000]: NETFILTER_CFG table=filter:104 family=10 entries=1 op=nft_register_rule pid=4000 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:57:41.102000 audit[4000]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff7ee2450 a2=0 a3=1 items=0 ppid=3859 pid=4000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:41.102000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 13:57:41.105000 audit[4003]: NETFILTER_CFG table=filter:105 family=10 entries=1 op=nft_register_rule pid=4003 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 13:57:41.105000 audit[4003]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc8ae8820 a2=0 a3=1 items=0 ppid=3859 pid=4003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:41.105000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 13:57:41.107000 audit[4005]: NETFILTER_CFG table=filter:106 family=10 entries=3 op=nft_register_rule pid=4005 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 20 13:57:41.107000 audit[4005]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=fffff4300710 a2=0 a3=1 items=0 ppid=3859 pid=4005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:41.107000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:57:41.108000 audit[4005]: NETFILTER_CFG table=nat:107 family=10 entries=7 op=nft_register_chain pid=4005 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 20 13:57:41.108000 audit[4005]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=fffff4300710 a2=0 a3=1 items=0 ppid=3859 pid=4005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:41.108000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:57:42.525094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount91506914.mount: Deactivated successfully. 
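Annotation (not part of the captured log): the PROCTITLE field in these audit records is the process argv, hex-encoded with NUL-separated arguments. A minimal Python sketch to turn it back into a readable command line; the sample string is the KUBE-PROXY-CANARY mangle-chain entry from the iptables records above (the kernel truncates long proctitles, so the runc entries decode with their container IDs cut off).

def decode_proctitle(hexstr: str) -> str:
    """Decode an audit PROCTITLE value: hex bytes, NUL-separated argv."""
    return " ".join(bytes.fromhex(hexstr).decode("utf-8", "replace").split("\x00"))

# Sample: the first iptables chain-creation record above.
sample = ("69707461626C6573002D770035002D5700313030303030"
          "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65")
print(decode_proctitle(sample))
# -> iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle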
Jan 20 13:57:44.979883 containerd[2143]: time="2026-01-20T13:57:44.979361064Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:57:44.981945 containerd[2143]: time="2026-01-20T13:57:44.981904871Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20776073" Jan 20 13:57:44.984931 containerd[2143]: time="2026-01-20T13:57:44.984901509Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:57:44.988415 containerd[2143]: time="2026-01-20T13:57:44.988382562Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:57:44.988723 containerd[2143]: time="2026-01-20T13:57:44.988694075Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 4.53807586s" Jan 20 13:57:44.988723 containerd[2143]: time="2026-01-20T13:57:44.988725012Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 20 13:57:44.994392 containerd[2143]: time="2026-01-20T13:57:44.991535132Z" level=info msg="CreateContainer within sandbox \"82a9956a6d67d3be51224fb59098b64b46bffc419e1b04e03bfd76a214d9f1a0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 20 13:57:45.010551 containerd[2143]: time="2026-01-20T13:57:45.010517261Z" level=info msg="Container 96c52bf0357071fbba7835a00e3cd1b3c9744f11a6e6e8d0e1532b15923feffc: CDI devices from CRI Config.CDIDevices: []" Jan 20 13:57:45.011479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2194813061.mount: Deactivated successfully. Jan 20 13:57:45.024424 containerd[2143]: time="2026-01-20T13:57:45.024369702Z" level=info msg="CreateContainer within sandbox \"82a9956a6d67d3be51224fb59098b64b46bffc419e1b04e03bfd76a214d9f1a0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"96c52bf0357071fbba7835a00e3cd1b3c9744f11a6e6e8d0e1532b15923feffc\"" Jan 20 13:57:45.024838 containerd[2143]: time="2026-01-20T13:57:45.024816036Z" level=info msg="StartContainer for \"96c52bf0357071fbba7835a00e3cd1b3c9744f11a6e6e8d0e1532b15923feffc\"" Jan 20 13:57:45.025852 containerd[2143]: time="2026-01-20T13:57:45.025659966Z" level=info msg="connecting to shim 96c52bf0357071fbba7835a00e3cd1b3c9744f11a6e6e8d0e1532b15923feffc" address="unix:///run/containerd/s/83563c4cf6bcb0d1f29714558ef24dbf1df6fb5829e9b1ff177fc3c7425f08b0" protocol=ttrpc version=3 Jan 20 13:57:45.049574 systemd[1]: Started cri-containerd-96c52bf0357071fbba7835a00e3cd1b3c9744f11a6e6e8d0e1532b15923feffc.scope - libcontainer container 96c52bf0357071fbba7835a00e3cd1b3c9744f11a6e6e8d0e1532b15923feffc. 
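Annotation (not part of the captured log): the pull entries above report bytes read=20776073 for quay.io/tigera/operator:v1.38.7 over 4.53807586s. A quick back-of-the-envelope check of the effective pull rate; both figures are copied verbatim from the log, and the log does not state whether the byte count covers only compressed layer data or everything fetched.

# Figures copied from the "stop pulling image" / "Pulled image" entries above.
bytes_read = 20_776_073
pull_seconds = 4.53807586

rate = bytes_read / pull_seconds
print(f"{rate / 1e6:.2f} MB/s ({rate / 2**20:.2f} MiB/s)")
# roughly 4.58 MB/s (about 4.37 MiB/s) for this pull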
Jan 20 13:57:45.057000 audit: BPF prog-id=170 op=LOAD Jan 20 13:57:45.057000 audit: BPF prog-id=171 op=LOAD Jan 20 13:57:45.057000 audit[4017]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3764 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:45.057000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936633532626630333537303731666262613738333561303065336364 Jan 20 13:57:45.058000 audit: BPF prog-id=171 op=UNLOAD Jan 20 13:57:45.058000 audit[4017]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3764 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:45.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936633532626630333537303731666262613738333561303065336364 Jan 20 13:57:45.058000 audit: BPF prog-id=172 op=LOAD Jan 20 13:57:45.058000 audit[4017]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3764 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:45.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936633532626630333537303731666262613738333561303065336364 Jan 20 13:57:45.058000 audit: BPF prog-id=173 op=LOAD Jan 20 13:57:45.058000 audit[4017]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3764 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:45.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936633532626630333537303731666262613738333561303065336364 Jan 20 13:57:45.058000 audit: BPF prog-id=173 op=UNLOAD Jan 20 13:57:45.058000 audit[4017]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3764 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:45.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936633532626630333537303731666262613738333561303065336364 Jan 20 13:57:45.058000 audit: BPF prog-id=172 op=UNLOAD Jan 20 13:57:45.058000 audit[4017]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3764 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:45.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936633532626630333537303731666262613738333561303065336364 Jan 20 13:57:45.058000 audit: BPF prog-id=174 op=LOAD Jan 20 13:57:45.058000 audit[4017]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3764 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:45.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936633532626630333537303731666262613738333561303065336364 Jan 20 13:57:45.077580 containerd[2143]: time="2026-01-20T13:57:45.077008867Z" level=info msg="StartContainer for \"96c52bf0357071fbba7835a00e3cd1b3c9744f11a6e6e8d0e1532b15923feffc\" returns successfully" Jan 20 13:57:45.903092 kubelet[3709]: I0120 13:57:45.902726 3709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-78f6c" podStartSLOduration=2.362992209 podStartE2EDuration="6.902712273s" podCreationTimestamp="2026-01-20 13:57:39 +0000 UTC" firstStartedPulling="2026-01-20 13:57:40.450120295 +0000 UTC m=+6.683561486" lastFinishedPulling="2026-01-20 13:57:44.989840351 +0000 UTC m=+11.223281550" observedRunningTime="2026-01-20 13:57:45.902515835 +0000 UTC m=+12.135957026" watchObservedRunningTime="2026-01-20 13:57:45.902712273 +0000 UTC m=+12.136153464" Jan 20 13:57:50.098814 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 20 13:57:50.098982 kernel: audit: type=1106 audit(1768917470.077:553): pid=2641 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 13:57:50.077000 audit[2641]: USER_END pid=2641 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 13:57:50.078846 sudo[2641]: pam_unix(sudo:session): session closed for user root Jan 20 13:57:50.114994 kernel: audit: type=1104 audit(1768917470.077:554): pid=2641 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 13:57:50.077000 audit[2641]: CRED_DISP pid=2641 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 20 13:57:50.174648 sshd[2640]: Connection closed by 10.200.16.10 port 44540 Jan 20 13:57:50.175205 sshd-session[2636]: pam_unix(sshd:session): session closed for user core Jan 20 13:57:50.175000 audit[2636]: USER_END pid=2636 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:57:50.178381 systemd[1]: sshd@6-10.200.20.32:22-10.200.16.10:44540.service: Deactivated successfully. Jan 20 13:57:50.194621 systemd-logind[2119]: Session 10 logged out. Waiting for processes to exit. Jan 20 13:57:50.198997 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 13:57:50.199210 systemd[1]: session-10.scope: Consumed 2.606s CPU time, 206.5M memory peak. Jan 20 13:57:50.201372 systemd-logind[2119]: Removed session 10. Jan 20 13:57:50.218127 kernel: audit: type=1106 audit(1768917470.175:555): pid=2636 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:57:50.218213 kernel: audit: type=1104 audit(1768917470.175:556): pid=2636 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:57:50.175000 audit[2636]: CRED_DISP pid=2636 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:57:50.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.32:22-10.200.16.10:44540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:57:50.234872 kernel: audit: type=1131 audit(1768917470.176:557): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.32:22-10.200.16.10:44540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 13:57:50.860000 audit[4094]: NETFILTER_CFG table=filter:108 family=2 entries=15 op=nft_register_rule pid=4094 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:57:50.889422 kernel: audit: type=1325 audit(1768917470.860:558): table=filter:108 family=2 entries=15 op=nft_register_rule pid=4094 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:57:50.889531 kernel: audit: type=1300 audit(1768917470.860:558): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffff638bdd0 a2=0 a3=1 items=0 ppid=3859 pid=4094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:50.860000 audit[4094]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffff638bdd0 a2=0 a3=1 items=0 ppid=3859 pid=4094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:50.860000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:57:50.901937 kernel: audit: type=1327 audit(1768917470.860:558): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:57:50.891000 audit[4094]: NETFILTER_CFG table=nat:109 family=2 entries=12 op=nft_register_rule pid=4094 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:57:50.912266 kernel: audit: type=1325 audit(1768917470.891:559): table=nat:109 family=2 entries=12 op=nft_register_rule pid=4094 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:57:50.891000 audit[4094]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff638bdd0 a2=0 a3=1 items=0 ppid=3859 pid=4094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:50.932491 kernel: audit: type=1300 audit(1768917470.891:559): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff638bdd0 a2=0 a3=1 items=0 ppid=3859 pid=4094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:50.891000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:57:50.935000 audit[4096]: NETFILTER_CFG table=filter:110 family=2 entries=16 op=nft_register_rule pid=4096 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:57:50.935000 audit[4096]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd1d4ed80 a2=0 a3=1 items=0 ppid=3859 pid=4096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:50.935000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:57:50.938000 audit[4096]: NETFILTER_CFG table=nat:111 family=2 entries=12 op=nft_register_rule pid=4096 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 20 13:57:50.938000 audit[4096]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd1d4ed80 a2=0 a3=1 items=0 ppid=3859 pid=4096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:50.938000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:57:54.158000 audit[4098]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=4098 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:57:54.158000 audit[4098]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffd8c41b30 a2=0 a3=1 items=0 ppid=3859 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:54.158000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:57:54.165000 audit[4098]: NETFILTER_CFG table=nat:113 family=2 entries=12 op=nft_register_rule pid=4098 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:57:54.165000 audit[4098]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd8c41b30 a2=0 a3=1 items=0 ppid=3859 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:54.165000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:57:54.179000 audit[4100]: NETFILTER_CFG table=filter:114 family=2 entries=18 op=nft_register_rule pid=4100 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:57:54.179000 audit[4100]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffd474b140 a2=0 a3=1 items=0 ppid=3859 pid=4100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:54.179000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:57:54.185000 audit[4100]: NETFILTER_CFG table=nat:115 family=2 entries=12 op=nft_register_rule pid=4100 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:57:54.185000 audit[4100]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd474b140 a2=0 a3=1 items=0 ppid=3859 pid=4100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:54.185000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:57:55.194000 audit[4103]: NETFILTER_CFG table=filter:116 family=2 entries=19 op=nft_register_rule pid=4103 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:57:55.199084 kernel: kauditd_printk_skb: 19 callbacks suppressed Jan 20 13:57:55.199129 kernel: audit: type=1325 
audit(1768917475.194:566): table=filter:116 family=2 entries=19 op=nft_register_rule pid=4103 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:57:55.194000 audit[4103]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffff181390 a2=0 a3=1 items=0 ppid=3859 pid=4103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:55.194000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:57:55.240098 kernel: audit: type=1300 audit(1768917475.194:566): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffff181390 a2=0 a3=1 items=0 ppid=3859 pid=4103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:55.240228 kernel: audit: type=1327 audit(1768917475.194:566): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:57:55.229000 audit[4103]: NETFILTER_CFG table=nat:117 family=2 entries=12 op=nft_register_rule pid=4103 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:57:55.251368 kernel: audit: type=1325 audit(1768917475.229:567): table=nat:117 family=2 entries=12 op=nft_register_rule pid=4103 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:57:55.229000 audit[4103]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffff181390 a2=0 a3=1 items=0 ppid=3859 pid=4103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:55.270052 kernel: audit: type=1300 audit(1768917475.229:567): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffff181390 a2=0 a3=1 items=0 ppid=3859 pid=4103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:55.229000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:57:55.282875 kernel: audit: type=1327 audit(1768917475.229:567): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:57:55.896318 systemd[1]: Created slice kubepods-besteffort-pod316f516f_4df8_4e57_aa7c_70d5bdd8cff1.slice - libcontainer container kubepods-besteffort-pod316f516f_4df8_4e57_aa7c_70d5bdd8cff1.slice. 
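The PROCTITLE values in the audit records above are hex-encoded command lines in which the arguments are separated by NUL bytes (the kernel records the raw /proc/PID/cmdline region); the value repeated throughout this cluster decodes to "iptables-restore -w 5 -W 100000 --noflush --counters". A minimal decoding sketch, assuming a Python interpreter is available (the variable name proctitle_hex is illustrative, and the value shown is the one copied from the records above):

    # Decode an audit PROCTITLE field: hex string -> NUL-separated argv.
    proctitle_hex = (
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"
    )
    argv = bytes.fromhex(proctitle_hex).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters

Splitting on the NUL byte recovers the original argv because the kernel stores proctitle exactly as the process's NUL-delimited command-line buffer.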
Jan 20 13:57:56.049892 kubelet[3709]: I0120 13:57:56.049839 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/316f516f-4df8-4e57-aa7c-70d5bdd8cff1-typha-certs\") pod \"calico-typha-6fcd78cfb5-tgw48\" (UID: \"316f516f-4df8-4e57-aa7c-70d5bdd8cff1\") " pod="calico-system/calico-typha-6fcd78cfb5-tgw48" Jan 20 13:57:56.049892 kubelet[3709]: I0120 13:57:56.049890 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/316f516f-4df8-4e57-aa7c-70d5bdd8cff1-tigera-ca-bundle\") pod \"calico-typha-6fcd78cfb5-tgw48\" (UID: \"316f516f-4df8-4e57-aa7c-70d5bdd8cff1\") " pod="calico-system/calico-typha-6fcd78cfb5-tgw48" Jan 20 13:57:56.049892 kubelet[3709]: I0120 13:57:56.049908 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr692\" (UniqueName: \"kubernetes.io/projected/316f516f-4df8-4e57-aa7c-70d5bdd8cff1-kube-api-access-xr692\") pod \"calico-typha-6fcd78cfb5-tgw48\" (UID: \"316f516f-4df8-4e57-aa7c-70d5bdd8cff1\") " pod="calico-system/calico-typha-6fcd78cfb5-tgw48" Jan 20 13:57:56.075618 systemd[1]: Created slice kubepods-besteffort-pod0adfb75c_f965_4ede_ae88_5a0fb78a17b5.slice - libcontainer container kubepods-besteffort-pod0adfb75c_f965_4ede_ae88_5a0fb78a17b5.slice. Jan 20 13:57:56.152559 kubelet[3709]: I0120 13:57:56.150596 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0adfb75c-f965-4ede-ae88-5a0fb78a17b5-flexvol-driver-host\") pod \"calico-node-4s4s6\" (UID: \"0adfb75c-f965-4ede-ae88-5a0fb78a17b5\") " pod="calico-system/calico-node-4s4s6" Jan 20 13:57:56.152559 kubelet[3709]: I0120 13:57:56.150639 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0adfb75c-f965-4ede-ae88-5a0fb78a17b5-var-lib-calico\") pod \"calico-node-4s4s6\" (UID: \"0adfb75c-f965-4ede-ae88-5a0fb78a17b5\") " pod="calico-system/calico-node-4s4s6" Jan 20 13:57:56.152559 kubelet[3709]: I0120 13:57:56.150661 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0adfb75c-f965-4ede-ae88-5a0fb78a17b5-lib-modules\") pod \"calico-node-4s4s6\" (UID: \"0adfb75c-f965-4ede-ae88-5a0fb78a17b5\") " pod="calico-system/calico-node-4s4s6" Jan 20 13:57:56.152559 kubelet[3709]: I0120 13:57:56.150673 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0adfb75c-f965-4ede-ae88-5a0fb78a17b5-policysync\") pod \"calico-node-4s4s6\" (UID: \"0adfb75c-f965-4ede-ae88-5a0fb78a17b5\") " pod="calico-system/calico-node-4s4s6" Jan 20 13:57:56.152559 kubelet[3709]: I0120 13:57:56.150684 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj9xn\" (UniqueName: \"kubernetes.io/projected/0adfb75c-f965-4ede-ae88-5a0fb78a17b5-kube-api-access-tj9xn\") pod \"calico-node-4s4s6\" (UID: \"0adfb75c-f965-4ede-ae88-5a0fb78a17b5\") " pod="calico-system/calico-node-4s4s6" Jan 20 13:57:56.152752 kubelet[3709]: I0120 13:57:56.150701 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0adfb75c-f965-4ede-ae88-5a0fb78a17b5-cni-bin-dir\") pod \"calico-node-4s4s6\" (UID: \"0adfb75c-f965-4ede-ae88-5a0fb78a17b5\") " pod="calico-system/calico-node-4s4s6" Jan 20 13:57:56.152752 kubelet[3709]: I0120 13:57:56.150713 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0adfb75c-f965-4ede-ae88-5a0fb78a17b5-xtables-lock\") pod \"calico-node-4s4s6\" (UID: \"0adfb75c-f965-4ede-ae88-5a0fb78a17b5\") " pod="calico-system/calico-node-4s4s6" Jan 20 13:57:56.152752 kubelet[3709]: I0120 13:57:56.150730 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0adfb75c-f965-4ede-ae88-5a0fb78a17b5-cni-net-dir\") pod \"calico-node-4s4s6\" (UID: \"0adfb75c-f965-4ede-ae88-5a0fb78a17b5\") " pod="calico-system/calico-node-4s4s6" Jan 20 13:57:56.152752 kubelet[3709]: I0120 13:57:56.150739 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0adfb75c-f965-4ede-ae88-5a0fb78a17b5-node-certs\") pod \"calico-node-4s4s6\" (UID: \"0adfb75c-f965-4ede-ae88-5a0fb78a17b5\") " pod="calico-system/calico-node-4s4s6" Jan 20 13:57:56.152752 kubelet[3709]: I0120 13:57:56.150751 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0adfb75c-f965-4ede-ae88-5a0fb78a17b5-tigera-ca-bundle\") pod \"calico-node-4s4s6\" (UID: \"0adfb75c-f965-4ede-ae88-5a0fb78a17b5\") " pod="calico-system/calico-node-4s4s6" Jan 20 13:57:56.152831 kubelet[3709]: I0120 13:57:56.150760 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0adfb75c-f965-4ede-ae88-5a0fb78a17b5-var-run-calico\") pod \"calico-node-4s4s6\" (UID: \"0adfb75c-f965-4ede-ae88-5a0fb78a17b5\") " pod="calico-system/calico-node-4s4s6" Jan 20 13:57:56.152831 kubelet[3709]: I0120 13:57:56.150771 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0adfb75c-f965-4ede-ae88-5a0fb78a17b5-cni-log-dir\") pod \"calico-node-4s4s6\" (UID: \"0adfb75c-f965-4ede-ae88-5a0fb78a17b5\") " pod="calico-system/calico-node-4s4s6" Jan 20 13:57:56.201933 containerd[2143]: time="2026-01-20T13:57:56.201887238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fcd78cfb5-tgw48,Uid:316f516f-4df8-4e57-aa7c-70d5bdd8cff1,Namespace:calico-system,Attempt:0,}" Jan 20 13:57:56.256968 kubelet[3709]: E0120 13:57:56.256928 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.257292 kubelet[3709]: W0120 13:57:56.257213 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.257292 kubelet[3709]: E0120 13:57:56.257240 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 13:57:56.261242 containerd[2143]: time="2026-01-20T13:57:56.259688416Z" level=info msg="connecting to shim 4580084ae5f6b01879fe0c693ad423fb518d46e6f350fc2ca8e776383b296790" address="unix:///run/containerd/s/afa4ac6cd509dde29f0802e9f08675eedcfe40f9108db5ebf76ecd46e45371ef" namespace=k8s.io protocol=ttrpc version=3 Jan 20 13:57:56.261864 kubelet[3709]: E0120 13:57:56.261793 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.261864 kubelet[3709]: W0120 13:57:56.261824 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.261864 kubelet[3709]: E0120 13:57:56.261839 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.276024 kubelet[3709]: E0120 13:57:56.275977 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vhqxx" podUID="23535579-9237-4da3-a34f-0ccbbd6a2ee0" Jan 20 13:57:56.282000 audit[4132]: NETFILTER_CFG table=filter:118 family=2 entries=21 op=nft_register_rule pid=4132 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:57:56.289230 kubelet[3709]: I0120 13:57:56.289185 3709 status_manager.go:890] "Failed to get status for pod" podUID="23535579-9237-4da3-a34f-0ccbbd6a2ee0" pod="calico-system/csi-node-driver-vhqxx" err="pods \"csi-node-driver-vhqxx\" is forbidden: User \"system:node:ci-9999.1.1-f-6b32856eb5\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-9999.1.1-f-6b32856eb5' and this object" Jan 20 13:57:56.294826 kubelet[3709]: E0120 13:57:56.294703 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.294826 kubelet[3709]: W0120 13:57:56.294726 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.294826 kubelet[3709]: E0120 13:57:56.294747 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 13:57:56.296414 kernel: audit: type=1325 audit(1768917476.282:568): table=filter:118 family=2 entries=21 op=nft_register_rule pid=4132 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:57:56.282000 audit[4132]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffcd05f180 a2=0 a3=1 items=0 ppid=3859 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:56.282000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:57:56.327741 kernel: audit: type=1300 audit(1768917476.282:568): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffcd05f180 a2=0 a3=1 items=0 ppid=3859 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:56.327850 kernel: audit: type=1327 audit(1768917476.282:568): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:57:56.330529 kernel: audit: type=1325 audit(1768917476.295:569): table=nat:119 family=2 entries=12 op=nft_register_rule pid=4132 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:57:56.295000 audit[4132]: NETFILTER_CFG table=nat:119 family=2 entries=12 op=nft_register_rule pid=4132 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:57:56.295000 audit[4132]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcd05f180 a2=0 a3=1 items=0 ppid=3859 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:56.295000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:57:56.348763 systemd[1]: Started cri-containerd-4580084ae5f6b01879fe0c693ad423fb518d46e6f350fc2ca8e776383b296790.scope - libcontainer container 4580084ae5f6b01879fe0c693ad423fb518d46e6f350fc2ca8e776383b296790. Jan 20 13:57:56.353215 kubelet[3709]: E0120 13:57:56.353195 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.353369 kubelet[3709]: W0120 13:57:56.353354 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.353602 kubelet[3709]: E0120 13:57:56.353422 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 13:57:56.353971 kubelet[3709]: E0120 13:57:56.353930 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.354200 kubelet[3709]: W0120 13:57:56.353943 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.354200 kubelet[3709]: E0120 13:57:56.354153 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.354532 kubelet[3709]: E0120 13:57:56.354500 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.354760 kubelet[3709]: W0120 13:57:56.354574 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.354760 kubelet[3709]: E0120 13:57:56.354588 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.355397 kubelet[3709]: E0120 13:57:56.355253 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.355397 kubelet[3709]: W0120 13:57:56.355298 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.355397 kubelet[3709]: E0120 13:57:56.355312 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.355969 kubelet[3709]: E0120 13:57:56.355835 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.355969 kubelet[3709]: W0120 13:57:56.355849 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.355969 kubelet[3709]: E0120 13:57:56.355860 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.356469 kubelet[3709]: E0120 13:57:56.356454 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.356846 kubelet[3709]: W0120 13:57:56.356742 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.356846 kubelet[3709]: E0120 13:57:56.356764 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 13:57:56.357346 kubelet[3709]: E0120 13:57:56.357066 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.357559 kubelet[3709]: W0120 13:57:56.357446 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.357559 kubelet[3709]: E0120 13:57:56.357465 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.357726 kubelet[3709]: E0120 13:57:56.357716 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.357793 kubelet[3709]: W0120 13:57:56.357783 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.357852 kubelet[3709]: E0120 13:57:56.357842 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.358083 kubelet[3709]: E0120 13:57:56.358040 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.358083 kubelet[3709]: W0120 13:57:56.358050 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.358083 kubelet[3709]: E0120 13:57:56.358058 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.358293 kubelet[3709]: E0120 13:57:56.358282 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.358406 kubelet[3709]: W0120 13:57:56.358352 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.358406 kubelet[3709]: E0120 13:57:56.358365 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.358600 kubelet[3709]: E0120 13:57:56.358591 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.358692 kubelet[3709]: W0120 13:57:56.358654 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.358692 kubelet[3709]: E0120 13:57:56.358669 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 13:57:56.358866 kubelet[3709]: E0120 13:57:56.358857 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.358987 kubelet[3709]: W0120 13:57:56.358911 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.358987 kubelet[3709]: E0120 13:57:56.358925 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.359428 kubelet[3709]: E0120 13:57:56.359185 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.359428 kubelet[3709]: W0120 13:57:56.359204 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.359428 kubelet[3709]: E0120 13:57:56.359214 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.359793 kubelet[3709]: E0120 13:57:56.359723 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.360061 kubelet[3709]: W0120 13:57:56.359831 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.360061 kubelet[3709]: E0120 13:57:56.359845 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.360401 kubelet[3709]: E0120 13:57:56.360346 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.360659 kubelet[3709]: W0120 13:57:56.360544 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.360659 kubelet[3709]: E0120 13:57:56.360563 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.361063 kubelet[3709]: E0120 13:57:56.361003 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.361354 kubelet[3709]: W0120 13:57:56.361169 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.361354 kubelet[3709]: E0120 13:57:56.361187 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 13:57:56.361829 kubelet[3709]: E0120 13:57:56.361659 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.361829 kubelet[3709]: W0120 13:57:56.361671 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.361829 kubelet[3709]: E0120 13:57:56.361680 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.362197 kubelet[3709]: E0120 13:57:56.362092 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.362197 kubelet[3709]: W0120 13:57:56.362103 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.362197 kubelet[3709]: E0120 13:57:56.362113 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.362654 kubelet[3709]: E0120 13:57:56.362554 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.362654 kubelet[3709]: W0120 13:57:56.362589 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.362654 kubelet[3709]: E0120 13:57:56.362600 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.363029 kubelet[3709]: E0120 13:57:56.362987 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.363029 kubelet[3709]: W0120 13:57:56.362998 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.363204 kubelet[3709]: E0120 13:57:56.363007 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 13:57:56.367000 audit: BPF prog-id=175 op=LOAD Jan 20 13:57:56.367000 audit: BPF prog-id=176 op=LOAD Jan 20 13:57:56.367000 audit[4130]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186180 a2=98 a3=0 items=0 ppid=4117 pid=4130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:56.367000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435383030383461653566366230313837396665306336393361643432 Jan 20 13:57:56.367000 audit: BPF prog-id=176 op=UNLOAD Jan 20 13:57:56.367000 audit[4130]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4117 pid=4130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:56.367000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435383030383461653566366230313837396665306336393361643432 Jan 20 13:57:56.367000 audit: BPF prog-id=177 op=LOAD Jan 20 13:57:56.367000 audit[4130]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=4117 pid=4130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:56.367000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435383030383461653566366230313837396665306336393361643432 Jan 20 13:57:56.367000 audit: BPF prog-id=178 op=LOAD Jan 20 13:57:56.367000 audit[4130]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000186168 a2=98 a3=0 items=0 ppid=4117 pid=4130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:56.367000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435383030383461653566366230313837396665306336393361643432 Jan 20 13:57:56.368000 audit: BPF prog-id=178 op=UNLOAD Jan 20 13:57:56.368000 audit[4130]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4117 pid=4130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:56.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435383030383461653566366230313837396665306336393361643432 Jan 20 13:57:56.368000 audit: BPF prog-id=177 op=UNLOAD Jan 
20 13:57:56.368000 audit[4130]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4117 pid=4130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:56.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435383030383461653566366230313837396665306336393361643432 Jan 20 13:57:56.368000 audit: BPF prog-id=179 op=LOAD Jan 20 13:57:56.368000 audit[4130]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186648 a2=98 a3=0 items=0 ppid=4117 pid=4130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:56.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435383030383461653566366230313837396665306336393361643432 Jan 20 13:57:56.380028 containerd[2143]: time="2026-01-20T13:57:56.379811982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4s4s6,Uid:0adfb75c-f965-4ede-ae88-5a0fb78a17b5,Namespace:calico-system,Attempt:0,}" Jan 20 13:57:56.412344 containerd[2143]: time="2026-01-20T13:57:56.412235190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fcd78cfb5-tgw48,Uid:316f516f-4df8-4e57-aa7c-70d5bdd8cff1,Namespace:calico-system,Attempt:0,} returns sandbox id \"4580084ae5f6b01879fe0c693ad423fb518d46e6f350fc2ca8e776383b296790\"" Jan 20 13:57:56.415658 containerd[2143]: time="2026-01-20T13:57:56.415624661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 20 13:57:56.438044 containerd[2143]: time="2026-01-20T13:57:56.437771069Z" level=info msg="connecting to shim 11628a46ea0ffa07fa72e9a8678988b411a114869fd7fddb52686e61979b03e9" address="unix:///run/containerd/s/d1a48f0d8c9aaf4096faacbee1d3ed51db0ac27a6b6985d1c5c41c4bb5f9ffc9" namespace=k8s.io protocol=ttrpc version=3 Jan 20 13:57:56.452709 kubelet[3709]: E0120 13:57:56.452686 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.452823 kubelet[3709]: W0120 13:57:56.452811 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.452910 kubelet[3709]: E0120 13:57:56.452900 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 13:57:56.452985 kubelet[3709]: I0120 13:57:56.452975 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/23535579-9237-4da3-a34f-0ccbbd6a2ee0-socket-dir\") pod \"csi-node-driver-vhqxx\" (UID: \"23535579-9237-4da3-a34f-0ccbbd6a2ee0\") " pod="calico-system/csi-node-driver-vhqxx" Jan 20 13:57:56.453217 kubelet[3709]: E0120 13:57:56.453197 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.453217 kubelet[3709]: W0120 13:57:56.453213 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.453349 kubelet[3709]: E0120 13:57:56.453232 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.453537 kubelet[3709]: E0120 13:57:56.453521 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.453537 kubelet[3709]: W0120 13:57:56.453535 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.453661 systemd[1]: Started cri-containerd-11628a46ea0ffa07fa72e9a8678988b411a114869fd7fddb52686e61979b03e9.scope - libcontainer container 11628a46ea0ffa07fa72e9a8678988b411a114869fd7fddb52686e61979b03e9. Jan 20 13:57:56.454250 kubelet[3709]: E0120 13:57:56.453938 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.454399 kubelet[3709]: E0120 13:57:56.454335 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.454399 kubelet[3709]: W0120 13:57:56.454350 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.454585 kubelet[3709]: E0120 13:57:56.454376 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 13:57:56.454585 kubelet[3709]: I0120 13:57:56.454452 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szm87\" (UniqueName: \"kubernetes.io/projected/23535579-9237-4da3-a34f-0ccbbd6a2ee0-kube-api-access-szm87\") pod \"csi-node-driver-vhqxx\" (UID: \"23535579-9237-4da3-a34f-0ccbbd6a2ee0\") " pod="calico-system/csi-node-driver-vhqxx" Jan 20 13:57:56.454774 kubelet[3709]: E0120 13:57:56.454761 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.455287 kubelet[3709]: W0120 13:57:56.455174 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.455287 kubelet[3709]: E0120 13:57:56.455200 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.455494 kubelet[3709]: E0120 13:57:56.455481 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.455642 kubelet[3709]: W0120 13:57:56.455571 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.455642 kubelet[3709]: E0120 13:57:56.455594 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.456015 kubelet[3709]: E0120 13:57:56.456003 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.456257 kubelet[3709]: W0120 13:57:56.456154 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.456257 kubelet[3709]: E0120 13:57:56.456170 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.456257 kubelet[3709]: I0120 13:57:56.456193 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/23535579-9237-4da3-a34f-0ccbbd6a2ee0-varrun\") pod \"csi-node-driver-vhqxx\" (UID: \"23535579-9237-4da3-a34f-0ccbbd6a2ee0\") " pod="calico-system/csi-node-driver-vhqxx" Jan 20 13:57:56.457426 kubelet[3709]: E0120 13:57:56.457288 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.457426 kubelet[3709]: W0120 13:57:56.457312 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.457426 kubelet[3709]: E0120 13:57:56.457331 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 13:57:56.457426 kubelet[3709]: I0120 13:57:56.457350 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23535579-9237-4da3-a34f-0ccbbd6a2ee0-kubelet-dir\") pod \"csi-node-driver-vhqxx\" (UID: \"23535579-9237-4da3-a34f-0ccbbd6a2ee0\") " pod="calico-system/csi-node-driver-vhqxx" Jan 20 13:57:56.457673 kubelet[3709]: E0120 13:57:56.457646 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.457673 kubelet[3709]: W0120 13:57:56.457666 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.457800 kubelet[3709]: E0120 13:57:56.457681 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.457962 kubelet[3709]: E0120 13:57:56.457944 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.457962 kubelet[3709]: W0120 13:57:56.457959 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.458041 kubelet[3709]: E0120 13:57:56.457987 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.458608 kubelet[3709]: E0120 13:57:56.458574 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.458608 kubelet[3709]: W0120 13:57:56.458593 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.458608 kubelet[3709]: E0120 13:57:56.458608 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.458702 kubelet[3709]: I0120 13:57:56.458623 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/23535579-9237-4da3-a34f-0ccbbd6a2ee0-registration-dir\") pod \"csi-node-driver-vhqxx\" (UID: \"23535579-9237-4da3-a34f-0ccbbd6a2ee0\") " pod="calico-system/csi-node-driver-vhqxx" Jan 20 13:57:56.459031 kubelet[3709]: E0120 13:57:56.459007 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.459031 kubelet[3709]: W0120 13:57:56.459027 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.459146 kubelet[3709]: E0120 13:57:56.459099 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 13:57:56.459635 kubelet[3709]: E0120 13:57:56.459612 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.459635 kubelet[3709]: W0120 13:57:56.459630 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.459635 kubelet[3709]: E0120 13:57:56.459645 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.460065 kubelet[3709]: E0120 13:57:56.460045 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.460065 kubelet[3709]: W0120 13:57:56.460065 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.460232 kubelet[3709]: E0120 13:57:56.460077 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.460473 kubelet[3709]: E0120 13:57:56.460456 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.460473 kubelet[3709]: W0120 13:57:56.460470 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.460601 kubelet[3709]: E0120 13:57:56.460481 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 13:57:56.465000 audit: BPF prog-id=180 op=LOAD Jan 20 13:57:56.467000 audit: BPF prog-id=181 op=LOAD Jan 20 13:57:56.467000 audit[4204]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4194 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:56.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131363238613436656130666661303766613732653961383637383938 Jan 20 13:57:56.467000 audit: BPF prog-id=181 op=UNLOAD Jan 20 13:57:56.467000 audit[4204]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4194 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:56.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131363238613436656130666661303766613732653961383637383938 Jan 20 13:57:56.467000 audit: BPF prog-id=182 op=LOAD Jan 20 13:57:56.467000 audit[4204]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4194 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:56.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131363238613436656130666661303766613732653961383637383938 Jan 20 13:57:56.467000 audit: BPF prog-id=183 op=LOAD Jan 20 13:57:56.467000 audit[4204]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4194 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:56.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131363238613436656130666661303766613732653961383637383938 Jan 20 13:57:56.468000 audit: BPF prog-id=183 op=UNLOAD Jan 20 13:57:56.468000 audit[4204]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4194 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:56.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131363238613436656130666661303766613732653961383637383938 Jan 20 13:57:56.468000 audit: BPF prog-id=182 op=UNLOAD Jan 
20 13:57:56.468000 audit[4204]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4194 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:56.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131363238613436656130666661303766613732653961383637383938 Jan 20 13:57:56.468000 audit: BPF prog-id=184 op=LOAD Jan 20 13:57:56.468000 audit[4204]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4194 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:56.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131363238613436656130666661303766613732653961383637383938 Jan 20 13:57:56.484443 containerd[2143]: time="2026-01-20T13:57:56.484325649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4s4s6,Uid:0adfb75c-f965-4ede-ae88-5a0fb78a17b5,Namespace:calico-system,Attempt:0,} returns sandbox id \"11628a46ea0ffa07fa72e9a8678988b411a114869fd7fddb52686e61979b03e9\"" Jan 20 13:57:56.560064 kubelet[3709]: E0120 13:57:56.559990 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.560064 kubelet[3709]: W0120 13:57:56.560015 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.560064 kubelet[3709]: E0120 13:57:56.560035 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.560629 kubelet[3709]: E0120 13:57:56.560215 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.560629 kubelet[3709]: W0120 13:57:56.560248 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.560629 kubelet[3709]: E0120 13:57:56.560257 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 13:57:56.561014 kubelet[3709]: E0120 13:57:56.560888 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.561014 kubelet[3709]: W0120 13:57:56.560905 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.561014 kubelet[3709]: E0120 13:57:56.560922 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.561335 kubelet[3709]: E0120 13:57:56.561311 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.561463 kubelet[3709]: W0120 13:57:56.561403 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.561463 kubelet[3709]: E0120 13:57:56.561426 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.561732 kubelet[3709]: E0120 13:57:56.561720 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.561838 kubelet[3709]: W0120 13:57:56.561779 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.561838 kubelet[3709]: E0120 13:57:56.561825 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.562116 kubelet[3709]: E0120 13:57:56.562061 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.562116 kubelet[3709]: W0120 13:57:56.562071 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.562116 kubelet[3709]: E0120 13:57:56.562104 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.562636 kubelet[3709]: E0120 13:57:56.562536 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.562636 kubelet[3709]: W0120 13:57:56.562568 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.562851 kubelet[3709]: E0120 13:57:56.562648 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 13:57:56.563491 kubelet[3709]: E0120 13:57:56.563215 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.563491 kubelet[3709]: W0120 13:57:56.563228 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.563491 kubelet[3709]: E0120 13:57:56.563249 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.563491 kubelet[3709]: E0120 13:57:56.563484 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.563491 kubelet[3709]: W0120 13:57:56.563495 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.563619 kubelet[3709]: E0120 13:57:56.563507 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.563794 kubelet[3709]: E0120 13:57:56.563634 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.563794 kubelet[3709]: W0120 13:57:56.563646 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.563794 kubelet[3709]: E0120 13:57:56.563653 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.563794 kubelet[3709]: E0120 13:57:56.563782 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.563794 kubelet[3709]: W0120 13:57:56.563788 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.563794 kubelet[3709]: E0120 13:57:56.563794 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.563915 kubelet[3709]: E0120 13:57:56.563877 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.563915 kubelet[3709]: W0120 13:57:56.563882 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.563915 kubelet[3709]: E0120 13:57:56.563887 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 13:57:56.564192 kubelet[3709]: E0120 13:57:56.563957 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.564192 kubelet[3709]: W0120 13:57:56.563966 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.564192 kubelet[3709]: E0120 13:57:56.563971 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.564549 kubelet[3709]: E0120 13:57:56.564359 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.564549 kubelet[3709]: W0120 13:57:56.564369 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.564549 kubelet[3709]: E0120 13:57:56.564417 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.565574 kubelet[3709]: E0120 13:57:56.565543 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.565574 kubelet[3709]: W0120 13:57:56.565558 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.565808 kubelet[3709]: E0120 13:57:56.565785 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.565934 kubelet[3709]: E0120 13:57:56.565914 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.565934 kubelet[3709]: W0120 13:57:56.565924 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.566143 kubelet[3709]: E0120 13:57:56.566125 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.566539 kubelet[3709]: E0120 13:57:56.566436 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.566685 kubelet[3709]: W0120 13:57:56.566616 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.566892 kubelet[3709]: E0120 13:57:56.566852 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 13:57:56.567191 kubelet[3709]: E0120 13:57:56.567166 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.567191 kubelet[3709]: W0120 13:57:56.567178 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.567370 kubelet[3709]: E0120 13:57:56.567235 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.567583 kubelet[3709]: E0120 13:57:56.567559 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.567583 kubelet[3709]: W0120 13:57:56.567571 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.567729 kubelet[3709]: E0120 13:57:56.567715 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.567976 kubelet[3709]: E0120 13:57:56.567952 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.568074 kubelet[3709]: W0120 13:57:56.568024 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.568255 kubelet[3709]: E0120 13:57:56.568222 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 13:57:56.568945 kubelet[3709]: E0120 13:57:56.568848 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.568945 kubelet[3709]: W0120 13:57:56.568862 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.569199 kubelet[3709]: E0120 13:57:56.569105 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.569199 kubelet[3709]: W0120 13:57:56.569116 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.569520 kubelet[3709]: E0120 13:57:56.569507 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.569580 kubelet[3709]: W0120 13:57:56.569570 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.569620 kubelet[3709]: E0120 13:57:56.569609 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.569672 kubelet[3709]: E0120 13:57:56.569663 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.569955 kubelet[3709]: E0120 13:57:56.569852 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.569955 kubelet[3709]: W0120 13:57:56.569863 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.569955 kubelet[3709]: E0120 13:57:56.569872 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.570133 kubelet[3709]: E0120 13:57:56.570123 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.570182 kubelet[3709]: W0120 13:57:56.570173 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.570224 kubelet[3709]: E0120 13:57:56.570215 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:56.570268 kubelet[3709]: E0120 13:57:56.570260 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 13:57:56.571143 kubelet[3709]: E0120 13:57:56.571099 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:56.571143 kubelet[3709]: W0120 13:57:56.571111 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:56.571143 kubelet[3709]: E0120 13:57:56.571121 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:57.831303 kubelet[3709]: E0120 13:57:57.831261 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vhqxx" podUID="23535579-9237-4da3-a34f-0ccbbd6a2ee0" Jan 20 13:57:57.898582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3163117364.mount: Deactivated successfully. Jan 20 13:57:58.750357 containerd[2143]: time="2026-01-20T13:57:58.750214142Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:57:58.754082 containerd[2143]: time="2026-01-20T13:57:58.753998801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Jan 20 13:57:58.756627 containerd[2143]: time="2026-01-20T13:57:58.756579535Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:57:58.761419 containerd[2143]: time="2026-01-20T13:57:58.761105553Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:57:58.761561 containerd[2143]: time="2026-01-20T13:57:58.761521277Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.345862528s" Jan 20 13:57:58.761600 containerd[2143]: time="2026-01-20T13:57:58.761557839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 20 13:57:58.763504 containerd[2143]: time="2026-01-20T13:57:58.763422135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 20 13:57:58.776128 containerd[2143]: time="2026-01-20T13:57:58.776042918Z" level=info msg="CreateContainer within sandbox \"4580084ae5f6b01879fe0c693ad423fb518d46e6f350fc2ca8e776383b296790\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 20 13:57:58.799714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount851904079.mount: Deactivated successfully. 
Jan 20 13:57:58.800845 containerd[2143]: time="2026-01-20T13:57:58.800805526Z" level=info msg="Container fad1fc09737e8e4fe57baece0ce1ee68be1a5cb1ffc732fd546717e8926c2b15: CDI devices from CRI Config.CDIDevices: []" Jan 20 13:57:58.818326 containerd[2143]: time="2026-01-20T13:57:58.818281504Z" level=info msg="CreateContainer within sandbox \"4580084ae5f6b01879fe0c693ad423fb518d46e6f350fc2ca8e776383b296790\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fad1fc09737e8e4fe57baece0ce1ee68be1a5cb1ffc732fd546717e8926c2b15\"" Jan 20 13:57:58.819409 containerd[2143]: time="2026-01-20T13:57:58.819343584Z" level=info msg="StartContainer for \"fad1fc09737e8e4fe57baece0ce1ee68be1a5cb1ffc732fd546717e8926c2b15\"" Jan 20 13:57:58.820980 containerd[2143]: time="2026-01-20T13:57:58.820951993Z" level=info msg="connecting to shim fad1fc09737e8e4fe57baece0ce1ee68be1a5cb1ffc732fd546717e8926c2b15" address="unix:///run/containerd/s/afa4ac6cd509dde29f0802e9f08675eedcfe40f9108db5ebf76ecd46e45371ef" protocol=ttrpc version=3 Jan 20 13:57:58.842532 systemd[1]: Started cri-containerd-fad1fc09737e8e4fe57baece0ce1ee68be1a5cb1ffc732fd546717e8926c2b15.scope - libcontainer container fad1fc09737e8e4fe57baece0ce1ee68be1a5cb1ffc732fd546717e8926c2b15. Jan 20 13:57:58.851000 audit: BPF prog-id=185 op=LOAD Jan 20 13:57:58.851000 audit: BPF prog-id=186 op=LOAD Jan 20 13:57:58.851000 audit[4283]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c180 a2=98 a3=0 items=0 ppid=4117 pid=4283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:58.851000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661643166633039373337653865346665353762616563653063653165 Jan 20 13:57:58.851000 audit: BPF prog-id=186 op=UNLOAD Jan 20 13:57:58.851000 audit[4283]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4117 pid=4283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:58.851000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661643166633039373337653865346665353762616563653063653165 Jan 20 13:57:58.852000 audit: BPF prog-id=187 op=LOAD Jan 20 13:57:58.852000 audit[4283]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=4117 pid=4283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:58.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661643166633039373337653865346665353762616563653063653165 Jan 20 13:57:58.852000 audit: BPF prog-id=188 op=LOAD Jan 20 13:57:58.852000 audit[4283]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400010c168 a2=98 a3=0 items=0 ppid=4117 
pid=4283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:58.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661643166633039373337653865346665353762616563653063653165 Jan 20 13:57:58.852000 audit: BPF prog-id=188 op=UNLOAD Jan 20 13:57:58.852000 audit[4283]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4117 pid=4283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:58.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661643166633039373337653865346665353762616563653063653165 Jan 20 13:57:58.852000 audit: BPF prog-id=187 op=UNLOAD Jan 20 13:57:58.852000 audit[4283]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4117 pid=4283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:58.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661643166633039373337653865346665353762616563653063653165 Jan 20 13:57:58.852000 audit: BPF prog-id=189 op=LOAD Jan 20 13:57:58.852000 audit[4283]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c648 a2=98 a3=0 items=0 ppid=4117 pid=4283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:57:58.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661643166633039373337653865346665353762616563653063653165 Jan 20 13:57:58.886242 containerd[2143]: time="2026-01-20T13:57:58.886105858Z" level=info msg="StartContainer for \"fad1fc09737e8e4fe57baece0ce1ee68be1a5cb1ffc732fd546717e8926c2b15\" returns successfully" Jan 20 13:57:58.953886 kubelet[3709]: I0120 13:57:58.953727 3709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6fcd78cfb5-tgw48" podStartSLOduration=1.605782575 podStartE2EDuration="3.953701342s" podCreationTimestamp="2026-01-20 13:57:55 +0000 UTC" firstStartedPulling="2026-01-20 13:57:56.415221768 +0000 UTC m=+22.648662959" lastFinishedPulling="2026-01-20 13:57:58.763140535 +0000 UTC m=+24.996581726" observedRunningTime="2026-01-20 13:57:58.940761885 +0000 UTC m=+25.174203076" watchObservedRunningTime="2026-01-20 13:57:58.953701342 +0000 UTC m=+25.187142533" Jan 20 13:57:58.975583 kubelet[3709]: E0120 13:57:58.975409 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.975583 
kubelet[3709]: W0120 13:57:58.975434 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.975583 kubelet[3709]: E0120 13:57:58.975454 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:58.977458 kubelet[3709]: E0120 13:57:58.977357 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.977693 kubelet[3709]: W0120 13:57:58.977374 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.977693 kubelet[3709]: E0120 13:57:58.977630 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:58.978634 kubelet[3709]: E0120 13:57:58.978618 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.978838 kubelet[3709]: W0120 13:57:58.978760 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.978838 kubelet[3709]: E0120 13:57:58.978781 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:58.979085 kubelet[3709]: E0120 13:57:58.979071 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.979159 kubelet[3709]: W0120 13:57:58.979137 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.979217 kubelet[3709]: E0120 13:57:58.979205 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:58.979675 kubelet[3709]: E0120 13:57:58.979642 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.979675 kubelet[3709]: W0120 13:57:58.979655 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.980042 kubelet[3709]: E0120 13:57:58.979763 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 13:57:58.980365 kubelet[3709]: E0120 13:57:58.980300 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.980365 kubelet[3709]: W0120 13:57:58.980312 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.980365 kubelet[3709]: E0120 13:57:58.980323 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:58.981042 kubelet[3709]: E0120 13:57:58.980945 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.981042 kubelet[3709]: W0120 13:57:58.980964 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.981042 kubelet[3709]: E0120 13:57:58.980995 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:58.983050 kubelet[3709]: E0120 13:57:58.983034 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.983151 kubelet[3709]: W0120 13:57:58.983119 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.983151 kubelet[3709]: E0120 13:57:58.983136 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:58.983468 kubelet[3709]: E0120 13:57:58.983415 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.983468 kubelet[3709]: W0120 13:57:58.983426 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.983468 kubelet[3709]: E0120 13:57:58.983436 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:58.983854 kubelet[3709]: E0120 13:57:58.983765 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.983854 kubelet[3709]: W0120 13:57:58.983779 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.983854 kubelet[3709]: E0120 13:57:58.983792 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 13:57:58.984468 kubelet[3709]: E0120 13:57:58.984404 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.984468 kubelet[3709]: W0120 13:57:58.984417 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.984468 kubelet[3709]: E0120 13:57:58.984427 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:58.984788 kubelet[3709]: E0120 13:57:58.984754 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.984788 kubelet[3709]: W0120 13:57:58.984765 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.984788 kubelet[3709]: E0120 13:57:58.984774 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:58.985320 kubelet[3709]: E0120 13:57:58.985265 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.985320 kubelet[3709]: W0120 13:57:58.985277 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.985320 kubelet[3709]: E0120 13:57:58.985287 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:58.985952 kubelet[3709]: E0120 13:57:58.985871 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.985952 kubelet[3709]: W0120 13:57:58.985884 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.985952 kubelet[3709]: E0120 13:57:58.985915 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:58.986588 kubelet[3709]: E0120 13:57:58.986547 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.986588 kubelet[3709]: W0120 13:57:58.986559 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.986588 kubelet[3709]: E0120 13:57:58.986570 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 13:57:58.986944 kubelet[3709]: E0120 13:57:58.986911 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.986944 kubelet[3709]: W0120 13:57:58.986921 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.986944 kubelet[3709]: E0120 13:57:58.986930 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:58.987503 kubelet[3709]: E0120 13:57:58.987477 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.987503 kubelet[3709]: W0120 13:57:58.987490 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.987671 kubelet[3709]: E0120 13:57:58.987599 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:58.988276 kubelet[3709]: E0120 13:57:58.987857 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.988370 kubelet[3709]: W0120 13:57:58.988349 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.988499 kubelet[3709]: E0120 13:57:58.988445 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:58.988697 kubelet[3709]: E0120 13:57:58.988677 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.988697 kubelet[3709]: W0120 13:57:58.988686 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.988873 kubelet[3709]: E0120 13:57:58.988846 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:58.989027 kubelet[3709]: E0120 13:57:58.989005 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.989027 kubelet[3709]: W0120 13:57:58.989015 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.989176 kubelet[3709]: E0120 13:57:58.989165 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 13:57:58.990587 kubelet[3709]: E0120 13:57:58.990552 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.990587 kubelet[3709]: W0120 13:57:58.990572 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.990768 kubelet[3709]: E0120 13:57:58.990755 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:58.990933 kubelet[3709]: E0120 13:57:58.990910 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.990933 kubelet[3709]: W0120 13:57:58.990922 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.991089 kubelet[3709]: E0120 13:57:58.991071 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:58.991233 kubelet[3709]: E0120 13:57:58.991211 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.991233 kubelet[3709]: W0120 13:57:58.991221 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.991411 kubelet[3709]: E0120 13:57:58.991372 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:58.991578 kubelet[3709]: E0120 13:57:58.991558 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.991578 kubelet[3709]: W0120 13:57:58.991568 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.991723 kubelet[3709]: E0120 13:57:58.991667 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:58.991878 kubelet[3709]: E0120 13:57:58.991858 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.991878 kubelet[3709]: W0120 13:57:58.991868 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.992007 kubelet[3709]: E0120 13:57:58.991954 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 13:57:58.992193 kubelet[3709]: E0120 13:57:58.992173 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.992193 kubelet[3709]: W0120 13:57:58.992183 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.992324 kubelet[3709]: E0120 13:57:58.992267 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:58.992727 kubelet[3709]: E0120 13:57:58.992634 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.992727 kubelet[3709]: W0120 13:57:58.992645 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.992727 kubelet[3709]: E0120 13:57:58.992656 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:58.992871 kubelet[3709]: E0120 13:57:58.992861 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.993526 kubelet[3709]: W0120 13:57:58.992922 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.993821 kubelet[3709]: E0120 13:57:58.993688 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:58.994006 kubelet[3709]: E0120 13:57:58.993995 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.994081 kubelet[3709]: W0120 13:57:58.994069 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.994212 kubelet[3709]: E0120 13:57:58.994190 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:58.994377 kubelet[3709]: E0120 13:57:58.994365 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.994472 kubelet[3709]: W0120 13:57:58.994451 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.994548 kubelet[3709]: E0120 13:57:58.994536 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 13:57:58.994767 kubelet[3709]: E0120 13:57:58.994746 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.994767 kubelet[3709]: W0120 13:57:58.994756 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.995145 kubelet[3709]: E0120 13:57:58.994844 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:58.995695 kubelet[3709]: E0120 13:57:58.995430 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.995695 kubelet[3709]: W0120 13:57:58.995443 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.995695 kubelet[3709]: E0120 13:57:58.995453 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:58.996266 kubelet[3709]: E0120 13:57:58.996252 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:58.996355 kubelet[3709]: W0120 13:57:58.996342 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:58.996426 kubelet[3709]: E0120 13:57:58.996416 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:59.832388 kubelet[3709]: E0120 13:57:59.832327 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vhqxx" podUID="23535579-9237-4da3-a34f-0ccbbd6a2ee0" Jan 20 13:57:59.924487 kubelet[3709]: I0120 13:57:59.924455 3709 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 13:57:59.991761 kubelet[3709]: E0120 13:57:59.991720 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:59.991761 kubelet[3709]: W0120 13:57:59.991740 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:59.991761 kubelet[3709]: E0120 13:57:59.991758 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 13:57:59.992604 kubelet[3709]: E0120 13:57:59.992070 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:59.992604 kubelet[3709]: W0120 13:57:59.992076 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:59.992604 kubelet[3709]: E0120 13:57:59.992086 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:59.992604 kubelet[3709]: E0120 13:57:59.992484 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:59.992604 kubelet[3709]: W0120 13:57:59.992499 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:59.992604 kubelet[3709]: E0120 13:57:59.992509 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:59.992796 kubelet[3709]: E0120 13:57:59.992783 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:59.992796 kubelet[3709]: W0120 13:57:59.992791 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:59.993063 kubelet[3709]: E0120 13:57:59.992810 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:59.993268 kubelet[3709]: E0120 13:57:59.993239 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:59.993268 kubelet[3709]: W0120 13:57:59.993255 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:59.993454 kubelet[3709]: E0120 13:57:59.993354 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:59.993555 kubelet[3709]: E0120 13:57:59.993542 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:59.993729 kubelet[3709]: W0120 13:57:59.993588 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:59.993729 kubelet[3709]: E0120 13:57:59.993605 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 13:57:59.993828 kubelet[3709]: E0120 13:57:59.993816 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:59.993884 kubelet[3709]: W0120 13:57:59.993874 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:59.993953 kubelet[3709]: E0120 13:57:59.993931 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:59.994189 kubelet[3709]: E0120 13:57:59.994141 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:59.994189 kubelet[3709]: W0120 13:57:59.994153 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:59.994189 kubelet[3709]: E0120 13:57:59.994162 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:59.994631 kubelet[3709]: E0120 13:57:59.994618 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:59.994782 kubelet[3709]: W0120 13:57:59.994668 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:59.994782 kubelet[3709]: E0120 13:57:59.994681 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:59.994991 kubelet[3709]: E0120 13:57:59.994949 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:59.994991 kubelet[3709]: W0120 13:57:59.994962 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:59.994991 kubelet[3709]: E0120 13:57:59.994972 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:59.995309 kubelet[3709]: E0120 13:57:59.995210 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:59.995309 kubelet[3709]: W0120 13:57:59.995221 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:59.995309 kubelet[3709]: E0120 13:57:59.995235 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 13:57:59.995472 kubelet[3709]: E0120 13:57:59.995460 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:59.995549 kubelet[3709]: W0120 13:57:59.995539 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:59.995690 kubelet[3709]: E0120 13:57:59.995597 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:59.995838 kubelet[3709]: E0120 13:57:59.995828 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:59.995914 kubelet[3709]: W0120 13:57:59.995903 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:59.996164 kubelet[3709]: E0120 13:57:59.996097 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:59.996453 kubelet[3709]: E0120 13:57:59.996427 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:59.996616 kubelet[3709]: W0120 13:57:59.996558 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:59.996616 kubelet[3709]: E0120 13:57:59.996576 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:57:59.996933 kubelet[3709]: E0120 13:57:59.996858 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:57:59.996933 kubelet[3709]: W0120 13:57:59.996870 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:57:59.996933 kubelet[3709]: E0120 13:57:59.996880 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:58:00.095346 kubelet[3709]: E0120 13:58:00.095209 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:58:00.095346 kubelet[3709]: W0120 13:58:00.095228 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:58:00.095346 kubelet[3709]: E0120 13:58:00.095246 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 13:58:00.096043 kubelet[3709]: E0120 13:58:00.095911 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:58:00.096043 kubelet[3709]: W0120 13:58:00.095927 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:58:00.096043 kubelet[3709]: E0120 13:58:00.095940 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:58:00.096410 kubelet[3709]: E0120 13:58:00.096272 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:58:00.096410 kubelet[3709]: W0120 13:58:00.096283 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:58:00.096410 kubelet[3709]: E0120 13:58:00.096298 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:58:00.097008 kubelet[3709]: E0120 13:58:00.096781 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:58:00.097008 kubelet[3709]: W0120 13:58:00.096794 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:58:00.097008 kubelet[3709]: E0120 13:58:00.096810 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:58:00.097008 kubelet[3709]: E0120 13:58:00.096940 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:58:00.097008 kubelet[3709]: W0120 13:58:00.096946 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:58:00.097008 kubelet[3709]: E0120 13:58:00.096957 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:58:00.097239 kubelet[3709]: E0120 13:58:00.097228 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:58:00.097342 kubelet[3709]: W0120 13:58:00.097294 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:58:00.097342 kubelet[3709]: E0120 13:58:00.097320 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 13:58:00.097573 kubelet[3709]: E0120 13:58:00.097545 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:58:00.097573 kubelet[3709]: W0120 13:58:00.097566 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:58:00.097573 kubelet[3709]: E0120 13:58:00.097581 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:58:00.097981 kubelet[3709]: E0120 13:58:00.097958 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:58:00.097981 kubelet[3709]: W0120 13:58:00.097977 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:58:00.097981 kubelet[3709]: E0120 13:58:00.098008 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:58:00.098281 kubelet[3709]: E0120 13:58:00.098265 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:58:00.098281 kubelet[3709]: W0120 13:58:00.098279 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:58:00.098551 kubelet[3709]: E0120 13:58:00.098304 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:58:00.098813 kubelet[3709]: E0120 13:58:00.098790 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:58:00.098813 kubelet[3709]: W0120 13:58:00.098809 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:58:00.098813 kubelet[3709]: E0120 13:58:00.098823 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:58:00.099213 kubelet[3709]: E0120 13:58:00.099093 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:58:00.099213 kubelet[3709]: W0120 13:58:00.099105 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:58:00.099213 kubelet[3709]: E0120 13:58:00.099126 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 13:58:00.099496 kubelet[3709]: E0120 13:58:00.099471 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:58:00.099639 kubelet[3709]: W0120 13:58:00.099484 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:58:00.099639 kubelet[3709]: E0120 13:58:00.099592 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:58:00.099878 kubelet[3709]: E0120 13:58:00.099866 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:58:00.100115 kubelet[3709]: W0120 13:58:00.099938 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:58:00.100246 kubelet[3709]: E0120 13:58:00.100222 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:58:00.100368 kubelet[3709]: E0120 13:58:00.100354 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:58:00.100569 kubelet[3709]: W0120 13:58:00.100409 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:58:00.100569 kubelet[3709]: E0120 13:58:00.100431 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:58:00.100951 kubelet[3709]: E0120 13:58:00.100848 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:58:00.100951 kubelet[3709]: W0120 13:58:00.100861 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:58:00.100951 kubelet[3709]: E0120 13:58:00.100876 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:58:00.101232 kubelet[3709]: E0120 13:58:00.101216 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:58:00.101429 kubelet[3709]: W0120 13:58:00.101293 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:58:00.101429 kubelet[3709]: E0120 13:58:00.101312 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 13:58:00.101683 kubelet[3709]: E0120 13:58:00.101669 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:58:00.101991 kubelet[3709]: W0120 13:58:00.101753 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:58:00.101991 kubelet[3709]: E0120 13:58:00.101769 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:58:00.102348 kubelet[3709]: E0120 13:58:00.102293 3709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 13:58:00.102348 kubelet[3709]: W0120 13:58:00.102309 3709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 13:58:00.102348 kubelet[3709]: E0120 13:58:00.102323 3709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 13:58:00.136034 containerd[2143]: time="2026-01-20T13:58:00.135540136Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:58:00.139429 containerd[2143]: time="2026-01-20T13:58:00.139356524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 20 13:58:00.142223 containerd[2143]: time="2026-01-20T13:58:00.142092879Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:58:00.145736 containerd[2143]: time="2026-01-20T13:58:00.145701628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:58:00.146137 containerd[2143]: time="2026-01-20T13:58:00.145976796Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.382527364s" Jan 20 13:58:00.146137 containerd[2143]: time="2026-01-20T13:58:00.146007245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 20 13:58:00.149048 containerd[2143]: time="2026-01-20T13:58:00.149021737Z" level=info msg="CreateContainer within sandbox \"11628a46ea0ffa07fa72e9a8678988b411a114869fd7fddb52686e61979b03e9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 20 13:58:00.170153 containerd[2143]: time="2026-01-20T13:58:00.168510280Z" level=info msg="Container cd6239060ecfc8957a67d9b46232a6f697ce24474bdf5cff7c2ba4bab1e832b5: CDI 
devices from CRI Config.CDIDevices: []" Jan 20 13:58:00.185324 containerd[2143]: time="2026-01-20T13:58:00.185280829Z" level=info msg="CreateContainer within sandbox \"11628a46ea0ffa07fa72e9a8678988b411a114869fd7fddb52686e61979b03e9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"cd6239060ecfc8957a67d9b46232a6f697ce24474bdf5cff7c2ba4bab1e832b5\"" Jan 20 13:58:00.185809 containerd[2143]: time="2026-01-20T13:58:00.185783084Z" level=info msg="StartContainer for \"cd6239060ecfc8957a67d9b46232a6f697ce24474bdf5cff7c2ba4bab1e832b5\"" Jan 20 13:58:00.187465 containerd[2143]: time="2026-01-20T13:58:00.187434623Z" level=info msg="connecting to shim cd6239060ecfc8957a67d9b46232a6f697ce24474bdf5cff7c2ba4bab1e832b5" address="unix:///run/containerd/s/d1a48f0d8c9aaf4096faacbee1d3ed51db0ac27a6b6985d1c5c41c4bb5f9ffc9" protocol=ttrpc version=3 Jan 20 13:58:00.207747 systemd[1]: Started cri-containerd-cd6239060ecfc8957a67d9b46232a6f697ce24474bdf5cff7c2ba4bab1e832b5.scope - libcontainer container cd6239060ecfc8957a67d9b46232a6f697ce24474bdf5cff7c2ba4bab1e832b5. Jan 20 13:58:00.258417 kernel: kauditd_printk_skb: 68 callbacks suppressed Jan 20 13:58:00.258487 kernel: audit: type=1334 audit(1768917480.252:594): prog-id=190 op=LOAD Jan 20 13:58:00.252000 audit: BPF prog-id=190 op=LOAD Jan 20 13:58:00.277976 kernel: audit: type=1300 audit(1768917480.252:594): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4194 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:00.252000 audit[4395]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4194 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:00.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364363233393036306563666338393537613637643962343632333261 Jan 20 13:58:00.294644 kernel: audit: type=1327 audit(1768917480.252:594): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364363233393036306563666338393537613637643962343632333261 Jan 20 13:58:00.252000 audit: BPF prog-id=191 op=LOAD Jan 20 13:58:00.301017 kernel: audit: type=1334 audit(1768917480.252:595): prog-id=191 op=LOAD Jan 20 13:58:00.252000 audit[4395]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4194 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:00.317324 kernel: audit: type=1300 audit(1768917480.252:595): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4194 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:00.252000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364363233393036306563666338393537613637643962343632333261 Jan 20 13:58:00.335970 kernel: audit: type=1327 audit(1768917480.252:595): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364363233393036306563666338393537613637643962343632333261 Jan 20 13:58:00.255000 audit: BPF prog-id=191 op=UNLOAD Jan 20 13:58:00.341940 kernel: audit: type=1334 audit(1768917480.255:596): prog-id=191 op=UNLOAD Jan 20 13:58:00.255000 audit[4395]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4194 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:00.357666 kernel: audit: type=1300 audit(1768917480.255:596): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4194 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:00.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364363233393036306563666338393537613637643962343632333261 Jan 20 13:58:00.374500 kernel: audit: type=1327 audit(1768917480.255:596): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364363233393036306563666338393537613637643962343632333261 Jan 20 13:58:00.255000 audit: BPF prog-id=190 op=UNLOAD Jan 20 13:58:00.381775 kernel: audit: type=1334 audit(1768917480.255:597): prog-id=190 op=UNLOAD Jan 20 13:58:00.255000 audit[4395]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4194 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:00.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364363233393036306563666338393537613637643962343632333261 Jan 20 13:58:00.255000 audit: BPF prog-id=192 op=LOAD Jan 20 13:58:00.255000 audit[4395]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4194 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:00.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364363233393036306563666338393537613637643962343632333261 Jan 20 13:58:00.389413 
containerd[2143]: time="2026-01-20T13:58:00.389233243Z" level=info msg="StartContainer for \"cd6239060ecfc8957a67d9b46232a6f697ce24474bdf5cff7c2ba4bab1e832b5\" returns successfully" Jan 20 13:58:00.396732 systemd[1]: cri-containerd-cd6239060ecfc8957a67d9b46232a6f697ce24474bdf5cff7c2ba4bab1e832b5.scope: Deactivated successfully. Jan 20 13:58:00.398812 containerd[2143]: time="2026-01-20T13:58:00.398760876Z" level=info msg="received container exit event container_id:\"cd6239060ecfc8957a67d9b46232a6f697ce24474bdf5cff7c2ba4bab1e832b5\" id:\"cd6239060ecfc8957a67d9b46232a6f697ce24474bdf5cff7c2ba4bab1e832b5\" pid:4408 exited_at:{seconds:1768917480 nanos:398320718}" Jan 20 13:58:00.399000 audit: BPF prog-id=192 op=UNLOAD Jan 20 13:58:00.420639 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd6239060ecfc8957a67d9b46232a6f697ce24474bdf5cff7c2ba4bab1e832b5-rootfs.mount: Deactivated successfully. Jan 20 13:58:01.831911 kubelet[3709]: E0120 13:58:01.831225 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vhqxx" podUID="23535579-9237-4da3-a34f-0ccbbd6a2ee0" Jan 20 13:58:01.934986 containerd[2143]: time="2026-01-20T13:58:01.934395979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 20 13:58:03.831550 kubelet[3709]: E0120 13:58:03.831494 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vhqxx" podUID="23535579-9237-4da3-a34f-0ccbbd6a2ee0" Jan 20 13:58:05.704056 containerd[2143]: time="2026-01-20T13:58:05.703999307Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:58:05.707231 containerd[2143]: time="2026-01-20T13:58:05.707181854Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Jan 20 13:58:05.711204 containerd[2143]: time="2026-01-20T13:58:05.710789501Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:58:05.715025 containerd[2143]: time="2026-01-20T13:58:05.714985679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:58:05.715475 containerd[2143]: time="2026-01-20T13:58:05.715448141Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.781006481s" Jan 20 13:58:05.715510 containerd[2143]: time="2026-01-20T13:58:05.715478974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 20 13:58:05.718079 containerd[2143]: time="2026-01-20T13:58:05.718046837Z" level=info 
msg="CreateContainer within sandbox \"11628a46ea0ffa07fa72e9a8678988b411a114869fd7fddb52686e61979b03e9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 13:58:05.738505 containerd[2143]: time="2026-01-20T13:58:05.738267575Z" level=info msg="Container 083a2800a30c278f6ef8ea5c70a032187ebc01f0fbe3a16d2c0cddfe25105f42: CDI devices from CRI Config.CDIDevices: []" Jan 20 13:58:05.754409 containerd[2143]: time="2026-01-20T13:58:05.754285494Z" level=info msg="CreateContainer within sandbox \"11628a46ea0ffa07fa72e9a8678988b411a114869fd7fddb52686e61979b03e9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"083a2800a30c278f6ef8ea5c70a032187ebc01f0fbe3a16d2c0cddfe25105f42\"" Jan 20 13:58:05.756100 containerd[2143]: time="2026-01-20T13:58:05.754921210Z" level=info msg="StartContainer for \"083a2800a30c278f6ef8ea5c70a032187ebc01f0fbe3a16d2c0cddfe25105f42\"" Jan 20 13:58:05.756100 containerd[2143]: time="2026-01-20T13:58:05.756026228Z" level=info msg="connecting to shim 083a2800a30c278f6ef8ea5c70a032187ebc01f0fbe3a16d2c0cddfe25105f42" address="unix:///run/containerd/s/d1a48f0d8c9aaf4096faacbee1d3ed51db0ac27a6b6985d1c5c41c4bb5f9ffc9" protocol=ttrpc version=3 Jan 20 13:58:05.776554 systemd[1]: Started cri-containerd-083a2800a30c278f6ef8ea5c70a032187ebc01f0fbe3a16d2c0cddfe25105f42.scope - libcontainer container 083a2800a30c278f6ef8ea5c70a032187ebc01f0fbe3a16d2c0cddfe25105f42. Jan 20 13:58:05.823000 audit: BPF prog-id=193 op=LOAD Jan 20 13:58:05.832763 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 20 13:58:05.832854 kernel: audit: type=1334 audit(1768917485.823:600): prog-id=193 op=LOAD Jan 20 13:58:05.832887 kubelet[3709]: E0120 13:58:05.831095 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vhqxx" podUID="23535579-9237-4da3-a34f-0ccbbd6a2ee0" Jan 20 13:58:05.823000 audit[4453]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4194 pid=4453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:05.850130 kernel: audit: type=1300 audit(1768917485.823:600): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4194 pid=4453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:05.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038336132383030613330633237386636656638656135633730613033 Jan 20 13:58:05.867281 kernel: audit: type=1327 audit(1768917485.823:600): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038336132383030613330633237386636656638656135633730613033 Jan 20 13:58:05.823000 audit: BPF prog-id=194 op=LOAD Jan 20 13:58:05.871827 kernel: audit: type=1334 audit(1768917485.823:601): prog-id=194 op=LOAD Jan 20 13:58:05.823000 audit[4453]: SYSCALL 
arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4194 pid=4453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:05.888843 kernel: audit: type=1300 audit(1768917485.823:601): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4194 pid=4453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:05.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038336132383030613330633237386636656638656135633730613033 Jan 20 13:58:05.905778 kernel: audit: type=1327 audit(1768917485.823:601): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038336132383030613330633237386636656638656135633730613033 Jan 20 13:58:05.823000 audit: BPF prog-id=194 op=UNLOAD Jan 20 13:58:05.912096 kernel: audit: type=1334 audit(1768917485.823:602): prog-id=194 op=UNLOAD Jan 20 13:58:05.823000 audit[4453]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4194 pid=4453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:05.928480 kernel: audit: type=1300 audit(1768917485.823:602): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4194 pid=4453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:05.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038336132383030613330633237386636656638656135633730613033 Jan 20 13:58:05.940684 containerd[2143]: time="2026-01-20T13:58:05.940608903Z" level=info msg="StartContainer for \"083a2800a30c278f6ef8ea5c70a032187ebc01f0fbe3a16d2c0cddfe25105f42\" returns successfully" Jan 20 13:58:05.944975 kernel: audit: type=1327 audit(1768917485.823:602): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038336132383030613330633237386636656638656135633730613033 Jan 20 13:58:05.823000 audit: BPF prog-id=193 op=UNLOAD Jan 20 13:58:05.949365 kernel: audit: type=1334 audit(1768917485.823:603): prog-id=193 op=UNLOAD Jan 20 13:58:05.823000 audit[4453]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4194 pid=4453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:05.823000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038336132383030613330633237386636656638656135633730613033 Jan 20 13:58:05.823000 audit: BPF prog-id=195 op=LOAD Jan 20 13:58:05.823000 audit[4453]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4194 pid=4453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:05.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038336132383030613330633237386636656638656135633730613033 Jan 20 13:58:07.459839 containerd[2143]: time="2026-01-20T13:58:07.459781412Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 13:58:07.462160 systemd[1]: cri-containerd-083a2800a30c278f6ef8ea5c70a032187ebc01f0fbe3a16d2c0cddfe25105f42.scope: Deactivated successfully. Jan 20 13:58:07.462712 systemd[1]: cri-containerd-083a2800a30c278f6ef8ea5c70a032187ebc01f0fbe3a16d2c0cddfe25105f42.scope: Consumed 335ms CPU time, 189.6M memory peak, 165.9M written to disk. Jan 20 13:58:07.466000 audit: BPF prog-id=195 op=UNLOAD Jan 20 13:58:07.466993 containerd[2143]: time="2026-01-20T13:58:07.466954882Z" level=info msg="received container exit event container_id:\"083a2800a30c278f6ef8ea5c70a032187ebc01f0fbe3a16d2c0cddfe25105f42\" id:\"083a2800a30c278f6ef8ea5c70a032187ebc01f0fbe3a16d2c0cddfe25105f42\" pid:4466 exited_at:{seconds:1768917487 nanos:465046327}" Jan 20 13:58:07.484121 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-083a2800a30c278f6ef8ea5c70a032187ebc01f0fbe3a16d2c0cddfe25105f42-rootfs.mount: Deactivated successfully. 
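The repeated "Failed to unmarshal output for command: init ... unexpected end of JSON input" messages earlier in this window are the kubelet's FlexVolume prober running before the flexvol-driver init container has installed the Calico "uds" binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/: the executable is missing, the driver call produces no output, and parsing an empty string as JSON fails. (The hex PROCTITLE fields in the interleaved audit records are simply the NUL-separated runc command lines, hex-encoded because they contain NUL bytes.) The sketch below is illustrative only, not kubelet source; the success payload mentioned in the final comment is an assumption about the Calico driver's reply.

import json
import subprocess

# Driver path named in the kubelet log for the Calico "uds" FlexVolume driver.
DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

def probe_flexvolume(driver: str) -> dict:
    try:
        out = subprocess.run([driver, "init"], capture_output=True, text=True).stdout
    except FileNotFoundError:
        out = ""  # "executable file not found" -> empty output, as in the log
    # json.loads("") raises JSONDecodeError, the Python analogue of Go's
    # "unexpected end of JSON input" reported by driver-call.go above.
    return json.loads(out)

# Once the flexvol-driver container started above has copied the binary into
# place, the same probe is assumed to return something like {"status": "Success"}.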
Jan 20 13:58:07.554597 kubelet[3709]: I0120 13:58:07.554564 3709 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 13:58:07.757945 kubelet[3709]: W0120 13:58:07.605150 3709 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-9999.1.1-f-6b32856eb5" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-9999.1.1-f-6b32856eb5' and this object Jan 20 13:58:07.757945 kubelet[3709]: E0120 13:58:07.605478 3709 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-9999.1.1-f-6b32856eb5\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-9999.1.1-f-6b32856eb5' and this object" logger="UnhandledError" Jan 20 13:58:07.757945 kubelet[3709]: W0120 13:58:07.605564 3709 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-9999.1.1-f-6b32856eb5" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-9999.1.1-f-6b32856eb5' and this object Jan 20 13:58:07.757945 kubelet[3709]: E0120 13:58:07.605580 3709 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-9999.1.1-f-6b32856eb5\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-9999.1.1-f-6b32856eb5' and this object" logger="UnhandledError" Jan 20 13:58:07.757945 kubelet[3709]: W0120 13:58:07.605615 3709 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-9999.1.1-f-6b32856eb5" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-9999.1.1-f-6b32856eb5' and this object Jan 20 13:58:07.600834 systemd[1]: Created slice kubepods-besteffort-podd927ba29_b3e9_4c16_90f8_d925c765f0ce.slice - libcontainer container kubepods-besteffort-podd927ba29_b3e9_4c16_90f8_d925c765f0ce.slice. 
Jan 20 13:58:07.758517 kubelet[3709]: E0120 13:58:07.605639 3709 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-9999.1.1-f-6b32856eb5\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-9999.1.1-f-6b32856eb5' and this object" logger="UnhandledError" Jan 20 13:58:07.758517 kubelet[3709]: I0120 13:58:07.743512 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85bd9e66-9697-4ce6-b203-5bf288af5ea8-config\") pod \"goldmane-666569f655-9ztjh\" (UID: \"85bd9e66-9697-4ce6-b203-5bf288af5ea8\") " pod="calico-system/goldmane-666569f655-9ztjh" Jan 20 13:58:07.758517 kubelet[3709]: I0120 13:58:07.743578 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6bc023f-2805-44ee-9145-9306204ad855-config-volume\") pod \"coredns-668d6bf9bc-9pmz2\" (UID: \"c6bc023f-2805-44ee-9145-9306204ad855\") " pod="kube-system/coredns-668d6bf9bc-9pmz2" Jan 20 13:58:07.758517 kubelet[3709]: I0120 13:58:07.743602 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rhtb\" (UniqueName: \"kubernetes.io/projected/9e279828-5731-43f9-9e3d-019d423f62e9-kube-api-access-7rhtb\") pod \"calico-apiserver-7dc74856cf-zx59p\" (UID: \"9e279828-5731-43f9-9e3d-019d423f62e9\") " pod="calico-apiserver/calico-apiserver-7dc74856cf-zx59p" Jan 20 13:58:07.758517 kubelet[3709]: I0120 13:58:07.743623 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/32c716f6-e8c9-410b-8d5d-21bab2f36497-whisker-backend-key-pair\") pod \"whisker-7955586cd8-bdj2l\" (UID: \"32c716f6-e8c9-410b-8d5d-21bab2f36497\") " pod="calico-system/whisker-7955586cd8-bdj2l" Jan 20 13:58:07.609439 systemd[1]: Created slice kubepods-burstable-pod1660a4b3_622a_43b1_b762_2fd404939ef1.slice - libcontainer container kubepods-burstable-pod1660a4b3_622a_43b1_b762_2fd404939ef1.slice. 
Jan 20 13:58:07.758649 kubelet[3709]: I0120 13:58:07.743657 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/85bd9e66-9697-4ce6-b203-5bf288af5ea8-goldmane-key-pair\") pod \"goldmane-666569f655-9ztjh\" (UID: \"85bd9e66-9697-4ce6-b203-5bf288af5ea8\") " pod="calico-system/goldmane-666569f655-9ztjh" Jan 20 13:58:07.758649 kubelet[3709]: I0120 13:58:07.743670 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xccw\" (UniqueName: \"kubernetes.io/projected/1660a4b3-622a-43b1-b762-2fd404939ef1-kube-api-access-8xccw\") pod \"coredns-668d6bf9bc-4dfzv\" (UID: \"1660a4b3-622a-43b1-b762-2fd404939ef1\") " pod="kube-system/coredns-668d6bf9bc-4dfzv" Jan 20 13:58:07.758649 kubelet[3709]: I0120 13:58:07.743681 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9e279828-5731-43f9-9e3d-019d423f62e9-calico-apiserver-certs\") pod \"calico-apiserver-7dc74856cf-zx59p\" (UID: \"9e279828-5731-43f9-9e3d-019d423f62e9\") " pod="calico-apiserver/calico-apiserver-7dc74856cf-zx59p" Jan 20 13:58:07.758649 kubelet[3709]: I0120 13:58:07.743744 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85bd9e66-9697-4ce6-b203-5bf288af5ea8-goldmane-ca-bundle\") pod \"goldmane-666569f655-9ztjh\" (UID: \"85bd9e66-9697-4ce6-b203-5bf288af5ea8\") " pod="calico-system/goldmane-666569f655-9ztjh" Jan 20 13:58:07.758649 kubelet[3709]: I0120 13:58:07.743757 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5c39ce3f-4fc4-445a-adab-faa3675de94e-calico-apiserver-certs\") pod \"calico-apiserver-7dc74856cf-7w8mb\" (UID: \"5c39ce3f-4fc4-445a-adab-faa3675de94e\") " pod="calico-apiserver/calico-apiserver-7dc74856cf-7w8mb" Jan 20 13:58:07.620016 systemd[1]: Created slice kubepods-besteffort-pod9e279828_5731_43f9_9e3d_019d423f62e9.slice - libcontainer container kubepods-besteffort-pod9e279828_5731_43f9_9e3d_019d423f62e9.slice. 
Jan 20 13:58:07.758757 kubelet[3709]: I0120 13:58:07.743772 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9rlh\" (UniqueName: \"kubernetes.io/projected/5c39ce3f-4fc4-445a-adab-faa3675de94e-kube-api-access-c9rlh\") pod \"calico-apiserver-7dc74856cf-7w8mb\" (UID: \"5c39ce3f-4fc4-445a-adab-faa3675de94e\") " pod="calico-apiserver/calico-apiserver-7dc74856cf-7w8mb" Jan 20 13:58:07.758757 kubelet[3709]: I0120 13:58:07.743813 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftvn6\" (UniqueName: \"kubernetes.io/projected/85bd9e66-9697-4ce6-b203-5bf288af5ea8-kube-api-access-ftvn6\") pod \"goldmane-666569f655-9ztjh\" (UID: \"85bd9e66-9697-4ce6-b203-5bf288af5ea8\") " pod="calico-system/goldmane-666569f655-9ztjh" Jan 20 13:58:07.758757 kubelet[3709]: I0120 13:58:07.743832 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj6nr\" (UniqueName: \"kubernetes.io/projected/d927ba29-b3e9-4c16-90f8-d925c765f0ce-kube-api-access-wj6nr\") pod \"calico-kube-controllers-68d86775db-xr92p\" (UID: \"d927ba29-b3e9-4c16-90f8-d925c765f0ce\") " pod="calico-system/calico-kube-controllers-68d86775db-xr92p" Jan 20 13:58:07.758757 kubelet[3709]: I0120 13:58:07.743844 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwqn6\" (UniqueName: \"kubernetes.io/projected/c6bc023f-2805-44ee-9145-9306204ad855-kube-api-access-bwqn6\") pod \"coredns-668d6bf9bc-9pmz2\" (UID: \"c6bc023f-2805-44ee-9145-9306204ad855\") " pod="kube-system/coredns-668d6bf9bc-9pmz2" Jan 20 13:58:07.758757 kubelet[3709]: I0120 13:58:07.743857 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1660a4b3-622a-43b1-b762-2fd404939ef1-config-volume\") pod \"coredns-668d6bf9bc-4dfzv\" (UID: \"1660a4b3-622a-43b1-b762-2fd404939ef1\") " pod="kube-system/coredns-668d6bf9bc-4dfzv" Jan 20 13:58:07.630707 systemd[1]: Created slice kubepods-besteffort-pod85bd9e66_9697_4ce6_b203_5bf288af5ea8.slice - libcontainer container kubepods-besteffort-pod85bd9e66_9697_4ce6_b203_5bf288af5ea8.slice. 
Jan 20 13:58:07.758898 kubelet[3709]: I0120 13:58:07.744034 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d927ba29-b3e9-4c16-90f8-d925c765f0ce-tigera-ca-bundle\") pod \"calico-kube-controllers-68d86775db-xr92p\" (UID: \"d927ba29-b3e9-4c16-90f8-d925c765f0ce\") " pod="calico-system/calico-kube-controllers-68d86775db-xr92p" Jan 20 13:58:07.758898 kubelet[3709]: I0120 13:58:07.744051 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32c716f6-e8c9-410b-8d5d-21bab2f36497-whisker-ca-bundle\") pod \"whisker-7955586cd8-bdj2l\" (UID: \"32c716f6-e8c9-410b-8d5d-21bab2f36497\") " pod="calico-system/whisker-7955586cd8-bdj2l" Jan 20 13:58:07.758898 kubelet[3709]: I0120 13:58:07.744062 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n77tw\" (UniqueName: \"kubernetes.io/projected/32c716f6-e8c9-410b-8d5d-21bab2f36497-kube-api-access-n77tw\") pod \"whisker-7955586cd8-bdj2l\" (UID: \"32c716f6-e8c9-410b-8d5d-21bab2f36497\") " pod="calico-system/whisker-7955586cd8-bdj2l" Jan 20 13:58:07.637504 systemd[1]: Created slice kubepods-burstable-podc6bc023f_2805_44ee_9145_9306204ad855.slice - libcontainer container kubepods-burstable-podc6bc023f_2805_44ee_9145_9306204ad855.slice. Jan 20 13:58:07.645074 systemd[1]: Created slice kubepods-besteffort-pod32c716f6_e8c9_410b_8d5d_21bab2f36497.slice - libcontainer container kubepods-besteffort-pod32c716f6_e8c9_410b_8d5d_21bab2f36497.slice. Jan 20 13:58:07.651298 systemd[1]: Created slice kubepods-besteffort-pod5c39ce3f_4fc4_445a_adab_faa3675de94e.slice - libcontainer container kubepods-besteffort-pod5c39ce3f_4fc4_445a_adab_faa3675de94e.slice. Jan 20 13:58:07.837436 systemd[1]: Created slice kubepods-besteffort-pod23535579_9237_4da3_a34f_0ccbbd6a2ee0.slice - libcontainer container kubepods-besteffort-pod23535579_9237_4da3_a34f_0ccbbd6a2ee0.slice. 
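Every RunPodSandbox attempt that follows fails for the same reason: the Calico CNI plugin needs a network config under /etc/cni/net.d and the /var/lib/calico/nodename file that calico-node writes once it is running, and neither exists yet. Below is a rough sketch of that readiness condition, using only the two paths named in the error messages; the glob pattern for config files is an assumption, not taken from the log.

import glob
import os

CNI_CONF_DIR = "/etc/cni/net.d"             # "no network config found in /etc/cni/net.d"
NODENAME_FILE = "/var/lib/calico/nodename"  # "stat /var/lib/calico/nodename: no such file or directory"

def calico_networking_ready() -> bool:
    # containerd loads *.conf / *.conflist files from the CNI config dir;
    # calico-node writes the nodename file and mounts /var/lib/calico/ for the plugin.
    has_cni_config = bool(glob.glob(os.path.join(CNI_CONF_DIR, "*.conf*")))
    has_nodename = os.path.isfile(NODENAME_FILE)
    return has_cni_config and has_nodename

Until both conditions hold, the kubelet keeps the affected pods in CreatePodSandboxError and retries, which is what the repeated pod_workers.go "Error syncing pod, skipping" entries below record.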
Jan 20 13:58:07.839610 containerd[2143]: time="2026-01-20T13:58:07.839574244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vhqxx,Uid:23535579-9237-4da3-a34f-0ccbbd6a2ee0,Namespace:calico-system,Attempt:0,}" Jan 20 13:58:08.043548 containerd[2143]: time="2026-01-20T13:58:08.043312184Z" level=error msg="Failed to destroy network for sandbox \"bc095ce76df82bd8680a0bb99296c5f898ec894b04881f210b094ba7871f7cfa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 13:58:08.049644 containerd[2143]: time="2026-01-20T13:58:08.049583074Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vhqxx,Uid:23535579-9237-4da3-a34f-0ccbbd6a2ee0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc095ce76df82bd8680a0bb99296c5f898ec894b04881f210b094ba7871f7cfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 13:58:08.050011 kubelet[3709]: E0120 13:58:08.049886 3709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc095ce76df82bd8680a0bb99296c5f898ec894b04881f210b094ba7871f7cfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 13:58:08.050011 kubelet[3709]: E0120 13:58:08.049970 3709 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc095ce76df82bd8680a0bb99296c5f898ec894b04881f210b094ba7871f7cfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vhqxx" Jan 20 13:58:08.050011 kubelet[3709]: E0120 13:58:08.049987 3709 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc095ce76df82bd8680a0bb99296c5f898ec894b04881f210b094ba7871f7cfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vhqxx" Jan 20 13:58:08.050169 kubelet[3709]: E0120 13:58:08.050144 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vhqxx_calico-system(23535579-9237-4da3-a34f-0ccbbd6a2ee0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vhqxx_calico-system(23535579-9237-4da3-a34f-0ccbbd6a2ee0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc095ce76df82bd8680a0bb99296c5f898ec894b04881f210b094ba7871f7cfa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vhqxx" podUID="23535579-9237-4da3-a34f-0ccbbd6a2ee0" Jan 20 13:58:08.058969 containerd[2143]: time="2026-01-20T13:58:08.058927571Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-68d86775db-xr92p,Uid:d927ba29-b3e9-4c16-90f8-d925c765f0ce,Namespace:calico-system,Attempt:0,}" Jan 20 13:58:08.059496 containerd[2143]: time="2026-01-20T13:58:08.059465987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7955586cd8-bdj2l,Uid:32c716f6-e8c9-410b-8d5d-21bab2f36497,Namespace:calico-system,Attempt:0,}" Jan 20 13:58:08.062912 containerd[2143]: time="2026-01-20T13:58:08.062878733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9ztjh,Uid:85bd9e66-9697-4ce6-b203-5bf288af5ea8,Namespace:calico-system,Attempt:0,}" Jan 20 13:58:08.128670 containerd[2143]: time="2026-01-20T13:58:08.128555819Z" level=error msg="Failed to destroy network for sandbox \"2fc8f6d9ba90dcddad7ea2731208e586cb96beb73e39aa4c39c999bd46d66cf4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 13:58:08.135776 containerd[2143]: time="2026-01-20T13:58:08.135654519Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d86775db-xr92p,Uid:d927ba29-b3e9-4c16-90f8-d925c765f0ce,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fc8f6d9ba90dcddad7ea2731208e586cb96beb73e39aa4c39c999bd46d66cf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 13:58:08.135927 kubelet[3709]: E0120 13:58:08.135881 3709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fc8f6d9ba90dcddad7ea2731208e586cb96beb73e39aa4c39c999bd46d66cf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 13:58:08.135967 kubelet[3709]: E0120 13:58:08.135949 3709 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fc8f6d9ba90dcddad7ea2731208e586cb96beb73e39aa4c39c999bd46d66cf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d86775db-xr92p" Jan 20 13:58:08.135986 kubelet[3709]: E0120 13:58:08.135965 3709 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fc8f6d9ba90dcddad7ea2731208e586cb96beb73e39aa4c39c999bd46d66cf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d86775db-xr92p" Jan 20 13:58:08.136871 kubelet[3709]: E0120 13:58:08.136001 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68d86775db-xr92p_calico-system(d927ba29-b3e9-4c16-90f8-d925c765f0ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68d86775db-xr92p_calico-system(d927ba29-b3e9-4c16-90f8-d925c765f0ce)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"2fc8f6d9ba90dcddad7ea2731208e586cb96beb73e39aa4c39c999bd46d66cf4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68d86775db-xr92p" podUID="d927ba29-b3e9-4c16-90f8-d925c765f0ce" Jan 20 13:58:08.151909 containerd[2143]: time="2026-01-20T13:58:08.151874669Z" level=error msg="Failed to destroy network for sandbox \"c78c0c5952ecfdff0b14d9f6e57c08c235d5dbe9cf4c6ef7d150e3f48799286d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 13:58:08.155555 containerd[2143]: time="2026-01-20T13:58:08.155520997Z" level=error msg="Failed to destroy network for sandbox \"05bd32221d96bbf75647d8a8a6f616dab83216fdb8ba64c9cb11545c61567325\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 13:58:08.157897 containerd[2143]: time="2026-01-20T13:58:08.157714737Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7955586cd8-bdj2l,Uid:32c716f6-e8c9-410b-8d5d-21bab2f36497,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c78c0c5952ecfdff0b14d9f6e57c08c235d5dbe9cf4c6ef7d150e3f48799286d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 13:58:08.158244 kubelet[3709]: E0120 13:58:08.158190 3709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c78c0c5952ecfdff0b14d9f6e57c08c235d5dbe9cf4c6ef7d150e3f48799286d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 13:58:08.158420 kubelet[3709]: E0120 13:58:08.158321 3709 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c78c0c5952ecfdff0b14d9f6e57c08c235d5dbe9cf4c6ef7d150e3f48799286d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7955586cd8-bdj2l" Jan 20 13:58:08.158420 kubelet[3709]: E0120 13:58:08.158341 3709 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c78c0c5952ecfdff0b14d9f6e57c08c235d5dbe9cf4c6ef7d150e3f48799286d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7955586cd8-bdj2l" Jan 20 13:58:08.158591 kubelet[3709]: E0120 13:58:08.158552 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7955586cd8-bdj2l_calico-system(32c716f6-e8c9-410b-8d5d-21bab2f36497)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7955586cd8-bdj2l_calico-system(32c716f6-e8c9-410b-8d5d-21bab2f36497)\\\": rpc 
error: code = Unknown desc = failed to setup network for sandbox \\\"c78c0c5952ecfdff0b14d9f6e57c08c235d5dbe9cf4c6ef7d150e3f48799286d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7955586cd8-bdj2l" podUID="32c716f6-e8c9-410b-8d5d-21bab2f36497" Jan 20 13:58:08.163713 containerd[2143]: time="2026-01-20T13:58:08.163650185Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9ztjh,Uid:85bd9e66-9697-4ce6-b203-5bf288af5ea8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"05bd32221d96bbf75647d8a8a6f616dab83216fdb8ba64c9cb11545c61567325\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 13:58:08.163929 kubelet[3709]: E0120 13:58:08.163891 3709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05bd32221d96bbf75647d8a8a6f616dab83216fdb8ba64c9cb11545c61567325\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 13:58:08.164002 kubelet[3709]: E0120 13:58:08.163943 3709 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05bd32221d96bbf75647d8a8a6f616dab83216fdb8ba64c9cb11545c61567325\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9ztjh" Jan 20 13:58:08.164002 kubelet[3709]: E0120 13:58:08.163958 3709 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05bd32221d96bbf75647d8a8a6f616dab83216fdb8ba64c9cb11545c61567325\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9ztjh" Jan 20 13:58:08.164002 kubelet[3709]: E0120 13:58:08.163987 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-9ztjh_calico-system(85bd9e66-9697-4ce6-b203-5bf288af5ea8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-9ztjh_calico-system(85bd9e66-9697-4ce6-b203-5bf288af5ea8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05bd32221d96bbf75647d8a8a6f616dab83216fdb8ba64c9cb11545c61567325\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-9ztjh" podUID="85bd9e66-9697-4ce6-b203-5bf288af5ea8" Jan 20 13:58:08.659360 containerd[2143]: time="2026-01-20T13:58:08.659318679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc74856cf-7w8mb,Uid:5c39ce3f-4fc4-445a-adab-faa3675de94e,Namespace:calico-apiserver,Attempt:0,}" Jan 20 13:58:08.659719 containerd[2143]: 
time="2026-01-20T13:58:08.659570263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9pmz2,Uid:c6bc023f-2805-44ee-9145-9306204ad855,Namespace:kube-system,Attempt:0,}" Jan 20 13:58:08.663371 containerd[2143]: time="2026-01-20T13:58:08.663345540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4dfzv,Uid:1660a4b3-622a-43b1-b762-2fd404939ef1,Namespace:kube-system,Attempt:0,}" Jan 20 13:58:08.663605 containerd[2143]: time="2026-01-20T13:58:08.663578315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc74856cf-zx59p,Uid:9e279828-5731-43f9-9e3d-019d423f62e9,Namespace:calico-apiserver,Attempt:0,}" Jan 20 13:58:08.747864 containerd[2143]: time="2026-01-20T13:58:08.747815399Z" level=error msg="Failed to destroy network for sandbox \"8425aebdd64f5ca8e54db4503938c8891ce47198880c0b46976106bb23b4b232\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 13:58:08.754838 containerd[2143]: time="2026-01-20T13:58:08.754768974Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc74856cf-7w8mb,Uid:5c39ce3f-4fc4-445a-adab-faa3675de94e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8425aebdd64f5ca8e54db4503938c8891ce47198880c0b46976106bb23b4b232\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 13:58:08.755542 kubelet[3709]: E0120 13:58:08.754997 3709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8425aebdd64f5ca8e54db4503938c8891ce47198880c0b46976106bb23b4b232\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 13:58:08.755542 kubelet[3709]: E0120 13:58:08.755051 3709 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8425aebdd64f5ca8e54db4503938c8891ce47198880c0b46976106bb23b4b232\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc74856cf-7w8mb" Jan 20 13:58:08.755542 kubelet[3709]: E0120 13:58:08.755066 3709 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8425aebdd64f5ca8e54db4503938c8891ce47198880c0b46976106bb23b4b232\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc74856cf-7w8mb" Jan 20 13:58:08.756011 kubelet[3709]: E0120 13:58:08.755106 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7dc74856cf-7w8mb_calico-apiserver(5c39ce3f-4fc4-445a-adab-faa3675de94e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7dc74856cf-7w8mb_calico-apiserver(5c39ce3f-4fc4-445a-adab-faa3675de94e)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"8425aebdd64f5ca8e54db4503938c8891ce47198880c0b46976106bb23b4b232\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7dc74856cf-7w8mb" podUID="5c39ce3f-4fc4-445a-adab-faa3675de94e" Jan 20 13:58:08.767253 containerd[2143]: time="2026-01-20T13:58:08.767117852Z" level=error msg="Failed to destroy network for sandbox \"66ceebb7b23ac4094a47d155420e10ceb178e917b976297666e2cd1062a0ab18\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 13:58:08.775014 containerd[2143]: time="2026-01-20T13:58:08.774933582Z" level=error msg="Failed to destroy network for sandbox \"ee6b5684c10920177c7db2233451663f9b5871435f985a1fcbf5a4cc63cd16f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 13:58:08.776402 containerd[2143]: time="2026-01-20T13:58:08.775772448Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4dfzv,Uid:1660a4b3-622a-43b1-b762-2fd404939ef1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"66ceebb7b23ac4094a47d155420e10ceb178e917b976297666e2cd1062a0ab18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 13:58:08.776805 kubelet[3709]: E0120 13:58:08.776663 3709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66ceebb7b23ac4094a47d155420e10ceb178e917b976297666e2cd1062a0ab18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 13:58:08.776881 kubelet[3709]: E0120 13:58:08.776826 3709 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66ceebb7b23ac4094a47d155420e10ceb178e917b976297666e2cd1062a0ab18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4dfzv" Jan 20 13:58:08.776881 kubelet[3709]: E0120 13:58:08.776842 3709 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66ceebb7b23ac4094a47d155420e10ceb178e917b976297666e2cd1062a0ab18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4dfzv" Jan 20 13:58:08.776927 kubelet[3709]: E0120 13:58:08.776876 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4dfzv_kube-system(1660a4b3-622a-43b1-b762-2fd404939ef1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-668d6bf9bc-4dfzv_kube-system(1660a4b3-622a-43b1-b762-2fd404939ef1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66ceebb7b23ac4094a47d155420e10ceb178e917b976297666e2cd1062a0ab18\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4dfzv" podUID="1660a4b3-622a-43b1-b762-2fd404939ef1" Jan 20 13:58:08.781708 containerd[2143]: time="2026-01-20T13:58:08.781661590Z" level=error msg="Failed to destroy network for sandbox \"e85dc89ecbc6ccd19a243cc9ffd47c234174fcae03974963685f98808fa426f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 13:58:08.783176 containerd[2143]: time="2026-01-20T13:58:08.782943422Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9pmz2,Uid:c6bc023f-2805-44ee-9145-9306204ad855,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee6b5684c10920177c7db2233451663f9b5871435f985a1fcbf5a4cc63cd16f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 13:58:08.783422 kubelet[3709]: E0120 13:58:08.783323 3709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee6b5684c10920177c7db2233451663f9b5871435f985a1fcbf5a4cc63cd16f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 13:58:08.783422 kubelet[3709]: E0120 13:58:08.783369 3709 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee6b5684c10920177c7db2233451663f9b5871435f985a1fcbf5a4cc63cd16f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9pmz2" Jan 20 13:58:08.783757 kubelet[3709]: E0120 13:58:08.783725 3709 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee6b5684c10920177c7db2233451663f9b5871435f985a1fcbf5a4cc63cd16f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9pmz2" Jan 20 13:58:08.783836 kubelet[3709]: E0120 13:58:08.783806 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9pmz2_kube-system(c6bc023f-2805-44ee-9145-9306204ad855)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9pmz2_kube-system(c6bc023f-2805-44ee-9145-9306204ad855)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee6b5684c10920177c7db2233451663f9b5871435f985a1fcbf5a4cc63cd16f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9pmz2" podUID="c6bc023f-2805-44ee-9145-9306204ad855" Jan 20 13:58:08.791468 containerd[2143]: time="2026-01-20T13:58:08.791433492Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc74856cf-zx59p,Uid:9e279828-5731-43f9-9e3d-019d423f62e9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e85dc89ecbc6ccd19a243cc9ffd47c234174fcae03974963685f98808fa426f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 13:58:08.791615 kubelet[3709]: E0120 13:58:08.791588 3709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e85dc89ecbc6ccd19a243cc9ffd47c234174fcae03974963685f98808fa426f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 13:58:08.791666 kubelet[3709]: E0120 13:58:08.791624 3709 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e85dc89ecbc6ccd19a243cc9ffd47c234174fcae03974963685f98808fa426f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc74856cf-zx59p" Jan 20 13:58:08.791666 kubelet[3709]: E0120 13:58:08.791641 3709 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e85dc89ecbc6ccd19a243cc9ffd47c234174fcae03974963685f98808fa426f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc74856cf-zx59p" Jan 20 13:58:08.791708 kubelet[3709]: E0120 13:58:08.791675 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7dc74856cf-zx59p_calico-apiserver(9e279828-5731-43f9-9e3d-019d423f62e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7dc74856cf-zx59p_calico-apiserver(9e279828-5731-43f9-9e3d-019d423f62e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e85dc89ecbc6ccd19a243cc9ffd47c234174fcae03974963685f98808fa426f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7dc74856cf-zx59p" podUID="9e279828-5731-43f9-9e3d-019d423f62e9" Jan 20 13:58:08.955133 containerd[2143]: time="2026-01-20T13:58:08.955024638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 20 13:58:14.766779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1552896810.mount: Deactivated successfully. 
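Every RunPodSandbox failure above reduces to the same root cause quoted in the error text: the Calico CNI plugin stats /var/lib/calico/nodename, a file that the calico/node container only writes after it has started with /var/lib/calico/ mounted, and at this point in the log that container is still being pulled. A minimal Go sketch of that check follows; it is an illustration of the logged behaviour, not the Calico source.

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename"

	// calico/node writes this file once it is running and has /var/lib/calico/ mounted;
	// until then, CNI ADD/DEL for every pending pod fails exactly as logged above.
	if _, err := os.Stat(nodenameFile); err != nil {
		fmt.Printf("CNI would fail here: %v\n", err)
		return
	}
	fmt.Println("nodename file present; the plugin can resolve this node's name")
}
```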
Jan 20 13:58:15.074330 containerd[2143]: time="2026-01-20T13:58:15.073582108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:58:15.077241 containerd[2143]: time="2026-01-20T13:58:15.077190277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Jan 20 13:58:15.080064 containerd[2143]: time="2026-01-20T13:58:15.080034262Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:58:15.083977 containerd[2143]: time="2026-01-20T13:58:15.083944616Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 13:58:15.084376 containerd[2143]: time="2026-01-20T13:58:15.084351845Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.129291693s" Jan 20 13:58:15.084410 containerd[2143]: time="2026-01-20T13:58:15.084381406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 20 13:58:15.098913 containerd[2143]: time="2026-01-20T13:58:15.098879290Z" level=info msg="CreateContainer within sandbox \"11628a46ea0ffa07fa72e9a8678988b411a114869fd7fddb52686e61979b03e9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 20 13:58:15.123784 containerd[2143]: time="2026-01-20T13:58:15.122938418Z" level=info msg="Container b347ec6fee42a998ca8ff7bd5d928b2a81659a80ebc450bc3a168901323c0ea5: CDI devices from CRI Config.CDIDevices: []" Jan 20 13:58:15.125792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2190655826.mount: Deactivated successfully. Jan 20 13:58:15.141248 containerd[2143]: time="2026-01-20T13:58:15.141211388Z" level=info msg="CreateContainer within sandbox \"11628a46ea0ffa07fa72e9a8678988b411a114869fd7fddb52686e61979b03e9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b347ec6fee42a998ca8ff7bd5d928b2a81659a80ebc450bc3a168901323c0ea5\"" Jan 20 13:58:15.141765 containerd[2143]: time="2026-01-20T13:58:15.141731445Z" level=info msg="StartContainer for \"b347ec6fee42a998ca8ff7bd5d928b2a81659a80ebc450bc3a168901323c0ea5\"" Jan 20 13:58:15.146552 containerd[2143]: time="2026-01-20T13:58:15.146496434Z" level=info msg="connecting to shim b347ec6fee42a998ca8ff7bd5d928b2a81659a80ebc450bc3a168901323c0ea5" address="unix:///run/containerd/s/d1a48f0d8c9aaf4096faacbee1d3ed51db0ac27a6b6985d1c5c41c4bb5f9ffc9" protocol=ttrpc version=3 Jan 20 13:58:15.161537 systemd[1]: Started cri-containerd-b347ec6fee42a998ca8ff7bd5d928b2a81659a80ebc450bc3a168901323c0ea5.scope - libcontainer container b347ec6fee42a998ca8ff7bd5d928b2a81659a80ebc450bc3a168901323c0ea5. 
Jan 20 13:58:15.204000 audit: BPF prog-id=196 op=LOAD Jan 20 13:58:15.207699 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 20 13:58:15.207767 kernel: audit: type=1334 audit(1768917495.204:606): prog-id=196 op=LOAD Jan 20 13:58:15.204000 audit[4725]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=4194 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:15.228610 kernel: audit: type=1300 audit(1768917495.204:606): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=4194 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:15.204000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233343765633666656534326139393863613866663762643564393238 Jan 20 13:58:15.244663 kernel: audit: type=1327 audit(1768917495.204:606): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233343765633666656534326139393863613866663762643564393238 Jan 20 13:58:15.204000 audit: BPF prog-id=197 op=LOAD Jan 20 13:58:15.249397 kernel: audit: type=1334 audit(1768917495.204:607): prog-id=197 op=LOAD Jan 20 13:58:15.204000 audit[4725]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=4194 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:15.265633 kernel: audit: type=1300 audit(1768917495.204:607): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=4194 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:15.204000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233343765633666656534326139393863613866663762643564393238 Jan 20 13:58:15.282240 kernel: audit: type=1327 audit(1768917495.204:607): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233343765633666656534326139393863613866663762643564393238 Jan 20 13:58:15.207000 audit: BPF prog-id=197 op=UNLOAD Jan 20 13:58:15.289922 kernel: audit: type=1334 audit(1768917495.207:608): prog-id=197 op=UNLOAD Jan 20 13:58:15.207000 audit[4725]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4194 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:15.305547 kernel: audit: type=1300 
audit(1768917495.207:608): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4194 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:15.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233343765633666656534326139393863613866663762643564393238 Jan 20 13:58:15.322342 kernel: audit: type=1327 audit(1768917495.207:608): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233343765633666656534326139393863613866663762643564393238 Jan 20 13:58:15.207000 audit: BPF prog-id=196 op=UNLOAD Jan 20 13:58:15.327460 kernel: audit: type=1334 audit(1768917495.207:609): prog-id=196 op=UNLOAD Jan 20 13:58:15.207000 audit[4725]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4194 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:15.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233343765633666656534326139393863613866663762643564393238 Jan 20 13:58:15.207000 audit: BPF prog-id=198 op=LOAD Jan 20 13:58:15.207000 audit[4725]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=4194 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:15.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233343765633666656534326139393863613866663762643564393238 Jan 20 13:58:15.331928 containerd[2143]: time="2026-01-20T13:58:15.331896303Z" level=info msg="StartContainer for \"b347ec6fee42a998ca8ff7bd5d928b2a81659a80ebc450bc3a168901323c0ea5\" returns successfully" Jan 20 13:58:15.534953 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 20 13:58:15.535072 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 20 13:58:15.692373 kubelet[3709]: I0120 13:58:15.692340 3709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/32c716f6-e8c9-410b-8d5d-21bab2f36497-whisker-backend-key-pair\") pod \"32c716f6-e8c9-410b-8d5d-21bab2f36497\" (UID: \"32c716f6-e8c9-410b-8d5d-21bab2f36497\") " Jan 20 13:58:15.692729 kubelet[3709]: I0120 13:58:15.692424 3709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n77tw\" (UniqueName: \"kubernetes.io/projected/32c716f6-e8c9-410b-8d5d-21bab2f36497-kube-api-access-n77tw\") pod \"32c716f6-e8c9-410b-8d5d-21bab2f36497\" (UID: \"32c716f6-e8c9-410b-8d5d-21bab2f36497\") " Jan 20 13:58:15.692729 kubelet[3709]: I0120 13:58:15.692450 3709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32c716f6-e8c9-410b-8d5d-21bab2f36497-whisker-ca-bundle\") pod \"32c716f6-e8c9-410b-8d5d-21bab2f36497\" (UID: \"32c716f6-e8c9-410b-8d5d-21bab2f36497\") " Jan 20 13:58:15.702514 kubelet[3709]: I0120 13:58:15.702477 3709 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32c716f6-e8c9-410b-8d5d-21bab2f36497-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "32c716f6-e8c9-410b-8d5d-21bab2f36497" (UID: "32c716f6-e8c9-410b-8d5d-21bab2f36497"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 13:58:15.705331 kubelet[3709]: I0120 13:58:15.705301 3709 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32c716f6-e8c9-410b-8d5d-21bab2f36497-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "32c716f6-e8c9-410b-8d5d-21bab2f36497" (UID: "32c716f6-e8c9-410b-8d5d-21bab2f36497"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 13:58:15.706127 kubelet[3709]: I0120 13:58:15.706098 3709 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32c716f6-e8c9-410b-8d5d-21bab2f36497-kube-api-access-n77tw" (OuterVolumeSpecName: "kube-api-access-n77tw") pod "32c716f6-e8c9-410b-8d5d-21bab2f36497" (UID: "32c716f6-e8c9-410b-8d5d-21bab2f36497"). InnerVolumeSpecName "kube-api-access-n77tw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 13:58:15.768221 systemd[1]: var-lib-kubelet-pods-32c716f6\x2de8c9\x2d410b\x2d8d5d\x2d21bab2f36497-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn77tw.mount: Deactivated successfully. Jan 20 13:58:15.768305 systemd[1]: var-lib-kubelet-pods-32c716f6\x2de8c9\x2d410b\x2d8d5d\x2d21bab2f36497-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 20 13:58:15.792711 kubelet[3709]: I0120 13:58:15.792660 3709 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32c716f6-e8c9-410b-8d5d-21bab2f36497-whisker-ca-bundle\") on node \"ci-9999.1.1-f-6b32856eb5\" DevicePath \"\"" Jan 20 13:58:15.792711 kubelet[3709]: I0120 13:58:15.792694 3709 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/32c716f6-e8c9-410b-8d5d-21bab2f36497-whisker-backend-key-pair\") on node \"ci-9999.1.1-f-6b32856eb5\" DevicePath \"\"" Jan 20 13:58:15.792711 kubelet[3709]: I0120 13:58:15.792704 3709 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n77tw\" (UniqueName: \"kubernetes.io/projected/32c716f6-e8c9-410b-8d5d-21bab2f36497-kube-api-access-n77tw\") on node \"ci-9999.1.1-f-6b32856eb5\" DevicePath \"\"" Jan 20 13:58:15.838419 systemd[1]: Removed slice kubepods-besteffort-pod32c716f6_e8c9_410b_8d5d_21bab2f36497.slice - libcontainer container kubepods-besteffort-pod32c716f6_e8c9_410b_8d5d_21bab2f36497.slice. Jan 20 13:58:15.953926 kubelet[3709]: I0120 13:58:15.953749 3709 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 13:58:15.999377 kubelet[3709]: I0120 13:58:15.999318 3709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4s4s6" podStartSLOduration=1.399735664 podStartE2EDuration="19.999303614s" podCreationTimestamp="2026-01-20 13:57:56 +0000 UTC" firstStartedPulling="2026-01-20 13:57:56.485637841 +0000 UTC m=+22.719079032" lastFinishedPulling="2026-01-20 13:58:15.085205791 +0000 UTC m=+41.318646982" observedRunningTime="2026-01-20 13:58:15.999181211 +0000 UTC m=+42.232622410" watchObservedRunningTime="2026-01-20 13:58:15.999303614 +0000 UTC m=+42.232744805" Jan 20 13:58:16.002000 audit[4786]: NETFILTER_CFG table=filter:120 family=2 entries=21 op=nft_register_rule pid=4786 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:58:16.002000 audit[4786]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc53f78a0 a2=0 a3=1 items=0 ppid=3859 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:16.002000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:58:16.008000 audit[4786]: NETFILTER_CFG table=nat:121 family=2 entries=19 op=nft_register_chain pid=4786 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:58:16.008000 audit[4786]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffc53f78a0 a2=0 a3=1 items=0 ppid=3859 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:16.008000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:58:16.077716 systemd[1]: Created slice kubepods-besteffort-podf1bb23d8_4b1f_4aa2_8b63_89adfd023536.slice - libcontainer container kubepods-besteffort-podf1bb23d8_4b1f_4aa2_8b63_89adfd023536.slice. 
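The pod_startup_latency_tracker entry above reports two durations for calico-node-4s4s6: podStartE2EDuration (pod creation to observed running) and the smaller podStartSLOduration, which appears to exclude the time spent pulling the image (firstStartedPulling to lastFinishedPulling). The numbers in the log are self-consistent under that reading; the short Go check below (illustrative only, timestamps copied verbatim from the entry) reproduces them.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker log entry above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	mustParse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := mustParse("2026-01-20 13:57:56 +0000 UTC")             // podCreationTimestamp
	firstPull := mustParse("2026-01-20 13:57:56.485637841 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2026-01-20 13:58:15.085205791 +0000 UTC")  // lastFinishedPulling
	running := mustParse("2026-01-20 13:58:15.999303614 +0000 UTC")   // observedRunningTime

	e2e := running.Sub(created)     // 19.999303614s -> podStartE2EDuration
	pull := lastPull.Sub(firstPull) // time spent pulling ghcr.io/flatcar/calico/node
	slo := e2e - pull               // 1.399735664s  -> podStartSLOduration

	fmt.Println("e2e:", e2e, "pull:", pull, "slo:", slo)
}
```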
Jan 20 13:58:16.094914 kubelet[3709]: I0120 13:58:16.094880 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6tsq\" (UniqueName: \"kubernetes.io/projected/f1bb23d8-4b1f-4aa2-8b63-89adfd023536-kube-api-access-q6tsq\") pod \"whisker-77799d95d6-2jfd5\" (UID: \"f1bb23d8-4b1f-4aa2-8b63-89adfd023536\") " pod="calico-system/whisker-77799d95d6-2jfd5" Jan 20 13:58:16.094914 kubelet[3709]: I0120 13:58:16.094914 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1bb23d8-4b1f-4aa2-8b63-89adfd023536-whisker-ca-bundle\") pod \"whisker-77799d95d6-2jfd5\" (UID: \"f1bb23d8-4b1f-4aa2-8b63-89adfd023536\") " pod="calico-system/whisker-77799d95d6-2jfd5" Jan 20 13:58:16.095054 kubelet[3709]: I0120 13:58:16.094950 3709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f1bb23d8-4b1f-4aa2-8b63-89adfd023536-whisker-backend-key-pair\") pod \"whisker-77799d95d6-2jfd5\" (UID: \"f1bb23d8-4b1f-4aa2-8b63-89adfd023536\") " pod="calico-system/whisker-77799d95d6-2jfd5" Jan 20 13:58:16.381205 containerd[2143]: time="2026-01-20T13:58:16.380990293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77799d95d6-2jfd5,Uid:f1bb23d8-4b1f-4aa2-8b63-89adfd023536,Namespace:calico-system,Attempt:0,}" Jan 20 13:58:16.524108 systemd-networkd[1707]: cali22c1d4483f3: Link UP Jan 20 13:58:16.524259 systemd-networkd[1707]: cali22c1d4483f3: Gained carrier Jan 20 13:58:16.538792 containerd[2143]: 2026-01-20 13:58:16.408 [INFO][4790] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 13:58:16.538792 containerd[2143]: 2026-01-20 13:58:16.451 [INFO][4790] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999.1.1--f--6b32856eb5-k8s-whisker--77799d95d6--2jfd5-eth0 whisker-77799d95d6- calico-system f1bb23d8-4b1f-4aa2-8b63-89adfd023536 875 0 2026-01-20 13:58:16 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:77799d95d6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-9999.1.1-f-6b32856eb5 whisker-77799d95d6-2jfd5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali22c1d4483f3 [] [] }} ContainerID="c4e08a55499c72f3e92f9c3697f0de2197f965f7e0a87fbac8cce18c04129251" Namespace="calico-system" Pod="whisker-77799d95d6-2jfd5" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-whisker--77799d95d6--2jfd5-" Jan 20 13:58:16.538792 containerd[2143]: 2026-01-20 13:58:16.451 [INFO][4790] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c4e08a55499c72f3e92f9c3697f0de2197f965f7e0a87fbac8cce18c04129251" Namespace="calico-system" Pod="whisker-77799d95d6-2jfd5" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-whisker--77799d95d6--2jfd5-eth0" Jan 20 13:58:16.538792 containerd[2143]: 2026-01-20 13:58:16.470 [INFO][4802] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4e08a55499c72f3e92f9c3697f0de2197f965f7e0a87fbac8cce18c04129251" HandleID="k8s-pod-network.c4e08a55499c72f3e92f9c3697f0de2197f965f7e0a87fbac8cce18c04129251" Workload="ci--9999.1.1--f--6b32856eb5-k8s-whisker--77799d95d6--2jfd5-eth0" Jan 20 13:58:16.538984 containerd[2143]: 2026-01-20 13:58:16.470 [INFO][4802] ipam/ipam_plugin.go 275: 
Auto assigning IP ContainerID="c4e08a55499c72f3e92f9c3697f0de2197f965f7e0a87fbac8cce18c04129251" HandleID="k8s-pod-network.c4e08a55499c72f3e92f9c3697f0de2197f965f7e0a87fbac8cce18c04129251" Workload="ci--9999.1.1--f--6b32856eb5-k8s-whisker--77799d95d6--2jfd5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b1a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-9999.1.1-f-6b32856eb5", "pod":"whisker-77799d95d6-2jfd5", "timestamp":"2026-01-20 13:58:16.470296953 +0000 UTC"}, Hostname:"ci-9999.1.1-f-6b32856eb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 13:58:16.538984 containerd[2143]: 2026-01-20 13:58:16.470 [INFO][4802] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 13:58:16.538984 containerd[2143]: 2026-01-20 13:58:16.470 [INFO][4802] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 13:58:16.538984 containerd[2143]: 2026-01-20 13:58:16.470 [INFO][4802] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999.1.1-f-6b32856eb5' Jan 20 13:58:16.538984 containerd[2143]: 2026-01-20 13:58:16.476 [INFO][4802] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c4e08a55499c72f3e92f9c3697f0de2197f965f7e0a87fbac8cce18c04129251" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:16.538984 containerd[2143]: 2026-01-20 13:58:16.479 [INFO][4802] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:16.538984 containerd[2143]: 2026-01-20 13:58:16.482 [INFO][4802] ipam/ipam.go 511: Trying affinity for 192.168.36.64/26 host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:16.538984 containerd[2143]: 2026-01-20 13:58:16.483 [INFO][4802] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.64/26 host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:16.538984 containerd[2143]: 2026-01-20 13:58:16.484 [INFO][4802] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:16.539126 containerd[2143]: 2026-01-20 13:58:16.484 [INFO][4802] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.c4e08a55499c72f3e92f9c3697f0de2197f965f7e0a87fbac8cce18c04129251" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:16.539126 containerd[2143]: 2026-01-20 13:58:16.485 [INFO][4802] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c4e08a55499c72f3e92f9c3697f0de2197f965f7e0a87fbac8cce18c04129251 Jan 20 13:58:16.539126 containerd[2143]: 2026-01-20 13:58:16.493 [INFO][4802] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.c4e08a55499c72f3e92f9c3697f0de2197f965f7e0a87fbac8cce18c04129251" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:16.539126 containerd[2143]: 2026-01-20 13:58:16.498 [INFO][4802] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.36.65/26] block=192.168.36.64/26 handle="k8s-pod-network.c4e08a55499c72f3e92f9c3697f0de2197f965f7e0a87fbac8cce18c04129251" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:16.539126 containerd[2143]: 2026-01-20 13:58:16.498 [INFO][4802] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.65/26] handle="k8s-pod-network.c4e08a55499c72f3e92f9c3697f0de2197f965f7e0a87fbac8cce18c04129251" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:16.539126 containerd[2143]: 2026-01-20 13:58:16.498 
[INFO][4802] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 13:58:16.539126 containerd[2143]: 2026-01-20 13:58:16.498 [INFO][4802] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.36.65/26] IPv6=[] ContainerID="c4e08a55499c72f3e92f9c3697f0de2197f965f7e0a87fbac8cce18c04129251" HandleID="k8s-pod-network.c4e08a55499c72f3e92f9c3697f0de2197f965f7e0a87fbac8cce18c04129251" Workload="ci--9999.1.1--f--6b32856eb5-k8s-whisker--77799d95d6--2jfd5-eth0" Jan 20 13:58:16.539307 containerd[2143]: 2026-01-20 13:58:16.501 [INFO][4790] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c4e08a55499c72f3e92f9c3697f0de2197f965f7e0a87fbac8cce18c04129251" Namespace="calico-system" Pod="whisker-77799d95d6-2jfd5" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-whisker--77799d95d6--2jfd5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.1.1--f--6b32856eb5-k8s-whisker--77799d95d6--2jfd5-eth0", GenerateName:"whisker-77799d95d6-", Namespace:"calico-system", SelfLink:"", UID:"f1bb23d8-4b1f-4aa2-8b63-89adfd023536", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 13, 58, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77799d95d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.1.1-f-6b32856eb5", ContainerID:"", Pod:"whisker-77799d95d6-2jfd5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.36.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali22c1d4483f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 13:58:16.539307 containerd[2143]: 2026-01-20 13:58:16.501 [INFO][4790] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.65/32] ContainerID="c4e08a55499c72f3e92f9c3697f0de2197f965f7e0a87fbac8cce18c04129251" Namespace="calico-system" Pod="whisker-77799d95d6-2jfd5" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-whisker--77799d95d6--2jfd5-eth0" Jan 20 13:58:16.539404 containerd[2143]: 2026-01-20 13:58:16.501 [INFO][4790] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22c1d4483f3 ContainerID="c4e08a55499c72f3e92f9c3697f0de2197f965f7e0a87fbac8cce18c04129251" Namespace="calico-system" Pod="whisker-77799d95d6-2jfd5" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-whisker--77799d95d6--2jfd5-eth0" Jan 20 13:58:16.539404 containerd[2143]: 2026-01-20 13:58:16.523 [INFO][4790] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c4e08a55499c72f3e92f9c3697f0de2197f965f7e0a87fbac8cce18c04129251" Namespace="calico-system" Pod="whisker-77799d95d6-2jfd5" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-whisker--77799d95d6--2jfd5-eth0" Jan 20 13:58:16.539433 containerd[2143]: 2026-01-20 13:58:16.523 [INFO][4790] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c4e08a55499c72f3e92f9c3697f0de2197f965f7e0a87fbac8cce18c04129251" Namespace="calico-system" Pod="whisker-77799d95d6-2jfd5" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-whisker--77799d95d6--2jfd5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.1.1--f--6b32856eb5-k8s-whisker--77799d95d6--2jfd5-eth0", GenerateName:"whisker-77799d95d6-", Namespace:"calico-system", SelfLink:"", UID:"f1bb23d8-4b1f-4aa2-8b63-89adfd023536", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 13, 58, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77799d95d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.1.1-f-6b32856eb5", ContainerID:"c4e08a55499c72f3e92f9c3697f0de2197f965f7e0a87fbac8cce18c04129251", Pod:"whisker-77799d95d6-2jfd5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.36.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali22c1d4483f3", MAC:"ea:ff:d8:db:31:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 13:58:16.539485 containerd[2143]: 2026-01-20 13:58:16.537 [INFO][4790] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c4e08a55499c72f3e92f9c3697f0de2197f965f7e0a87fbac8cce18c04129251" Namespace="calico-system" Pod="whisker-77799d95d6-2jfd5" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-whisker--77799d95d6--2jfd5-eth0" Jan 20 13:58:16.574805 containerd[2143]: time="2026-01-20T13:58:16.574752895Z" level=info msg="connecting to shim c4e08a55499c72f3e92f9c3697f0de2197f965f7e0a87fbac8cce18c04129251" address="unix:///run/containerd/s/bd767f7b7173be3d8fb742dc41ac41bf4139093485b64b44aac80c86764be85a" namespace=k8s.io protocol=ttrpc version=3 Jan 20 13:58:16.598584 systemd[1]: Started cri-containerd-c4e08a55499c72f3e92f9c3697f0de2197f965f7e0a87fbac8cce18c04129251.scope - libcontainer container c4e08a55499c72f3e92f9c3697f0de2197f965f7e0a87fbac8cce18c04129251. 
Jan 20 13:58:16.607000 audit: BPF prog-id=199 op=LOAD Jan 20 13:58:16.607000 audit: BPF prog-id=200 op=LOAD Jan 20 13:58:16.607000 audit[4835]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=4825 pid=4835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:16.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334653038613535343939633732663365393266396333363937663064 Jan 20 13:58:16.607000 audit: BPF prog-id=200 op=UNLOAD Jan 20 13:58:16.607000 audit[4835]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4825 pid=4835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:16.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334653038613535343939633732663365393266396333363937663064 Jan 20 13:58:16.607000 audit: BPF prog-id=201 op=LOAD Jan 20 13:58:16.607000 audit[4835]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=4825 pid=4835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:16.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334653038613535343939633732663365393266396333363937663064 Jan 20 13:58:16.607000 audit: BPF prog-id=202 op=LOAD Jan 20 13:58:16.607000 audit[4835]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=4825 pid=4835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:16.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334653038613535343939633732663365393266396333363937663064 Jan 20 13:58:16.607000 audit: BPF prog-id=202 op=UNLOAD Jan 20 13:58:16.607000 audit[4835]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4825 pid=4835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:16.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334653038613535343939633732663365393266396333363937663064 Jan 20 13:58:16.607000 audit: BPF prog-id=201 op=UNLOAD Jan 20 13:58:16.607000 audit[4835]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4825 pid=4835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:16.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334653038613535343939633732663365393266396333363937663064 Jan 20 13:58:16.607000 audit: BPF prog-id=203 op=LOAD Jan 20 13:58:16.607000 audit[4835]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=4825 pid=4835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:16.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334653038613535343939633732663365393266396333363937663064 Jan 20 13:58:16.630963 containerd[2143]: time="2026-01-20T13:58:16.630911640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77799d95d6-2jfd5,Uid:f1bb23d8-4b1f-4aa2-8b63-89adfd023536,Namespace:calico-system,Attempt:0,} returns sandbox id \"c4e08a55499c72f3e92f9c3697f0de2197f965f7e0a87fbac8cce18c04129251\"" Jan 20 13:58:16.633229 containerd[2143]: time="2026-01-20T13:58:16.633145958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 13:58:16.919289 containerd[2143]: time="2026-01-20T13:58:16.919155809Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 13:58:17.103000 audit: BPF prog-id=204 op=LOAD Jan 20 13:58:17.103000 audit[4985]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd238cc68 a2=98 a3=ffffd238cc58 items=0 ppid=4905 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.103000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 13:58:17.103000 audit: BPF prog-id=204 op=UNLOAD Jan 20 13:58:17.103000 audit[4985]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd238cc38 a3=0 items=0 ppid=4905 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.103000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 13:58:17.103000 audit: BPF prog-id=205 op=LOAD Jan 20 13:58:17.103000 audit[4985]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd238cb18 a2=74 a3=95 items=0 ppid=4905 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.103000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 13:58:17.103000 audit: BPF prog-id=205 op=UNLOAD Jan 20 13:58:17.103000 audit[4985]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4905 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.103000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 13:58:17.103000 audit: BPF prog-id=206 op=LOAD Jan 20 13:58:17.103000 audit[4985]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd238cb48 a2=40 a3=ffffd238cb78 items=0 ppid=4905 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.103000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 13:58:17.103000 audit: BPF prog-id=206 op=UNLOAD Jan 20 13:58:17.103000 audit[4985]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffd238cb78 items=0 ppid=4905 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.103000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 13:58:17.105749 containerd[2143]: time="2026-01-20T13:58:17.105701298Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 13:58:17.105957 containerd[2143]: time="2026-01-20T13:58:17.105786524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 13:58:17.106075 kubelet[3709]: E0120 13:58:17.105975 3709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 13:58:17.106075 kubelet[3709]: E0120 13:58:17.106026 3709 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 13:58:17.106000 audit: BPF prog-id=207 op=LOAD Jan 20 13:58:17.106000 audit[4986]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd5c82e18 a2=98 a3=ffffd5c82e08 items=0 ppid=4905 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.106000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 13:58:17.106000 audit: BPF prog-id=207 op=UNLOAD Jan 20 13:58:17.106000 audit[4986]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd5c82de8 a3=0 items=0 ppid=4905 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.106000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 13:58:17.106000 audit: BPF prog-id=208 op=LOAD Jan 20 13:58:17.106000 audit[4986]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd5c82aa8 a2=74 a3=95 items=0 ppid=4905 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.106000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 13:58:17.107000 audit: BPF prog-id=208 op=UNLOAD Jan 20 13:58:17.107000 audit[4986]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4905 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.107000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 13:58:17.107000 audit: BPF prog-id=209 op=LOAD Jan 20 13:58:17.107000 audit[4986]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd5c82b08 a2=94 a3=2 items=0 ppid=4905 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.107000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 13:58:17.107000 audit: BPF prog-id=209 op=UNLOAD Jan 20 13:58:17.107000 audit[4986]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4905 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.107000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 13:58:17.113770 kubelet[3709]: E0120 13:58:17.113715 3709 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a3d21d502bf249319790de3c1cf28ae0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q6tsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77799d95d6-2jfd5_calico-system(f1bb23d8-4b1f-4aa2-8b63-89adfd023536): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 13:58:17.116100 containerd[2143]: time="2026-01-20T13:58:17.115983787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 13:58:17.186000 audit: BPF prog-id=210 op=LOAD Jan 20 13:58:17.186000 audit[4986]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd5c82ac8 a2=40 a3=ffffd5c82af8 items=0 ppid=4905 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.186000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 13:58:17.186000 audit: BPF prog-id=210 op=UNLOAD Jan 20 13:58:17.186000 audit[4986]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffd5c82af8 items=0 ppid=4905 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.186000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 13:58:17.193000 audit: BPF prog-id=211 op=LOAD Jan 20 13:58:17.193000 audit[4986]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd5c82ad8 a2=94 a3=4 items=0 ppid=4905 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.193000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 13:58:17.193000 audit: BPF prog-id=211 op=UNLOAD Jan 20 13:58:17.193000 audit[4986]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c 
a2=70 a3=4 items=0 ppid=4905 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.193000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 13:58:17.193000 audit: BPF prog-id=212 op=LOAD Jan 20 13:58:17.193000 audit[4986]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd5c82918 a2=94 a3=5 items=0 ppid=4905 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.193000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 13:58:17.193000 audit: BPF prog-id=212 op=UNLOAD Jan 20 13:58:17.193000 audit[4986]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4905 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.193000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 13:58:17.194000 audit: BPF prog-id=213 op=LOAD Jan 20 13:58:17.194000 audit[4986]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd5c82b48 a2=94 a3=6 items=0 ppid=4905 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.194000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 13:58:17.194000 audit: BPF prog-id=213 op=UNLOAD Jan 20 13:58:17.194000 audit[4986]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4905 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.194000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 13:58:17.194000 audit: BPF prog-id=214 op=LOAD Jan 20 13:58:17.194000 audit[4986]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd5c82318 a2=94 a3=83 items=0 ppid=4905 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.194000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 13:58:17.195000 audit: BPF prog-id=215 op=LOAD Jan 20 13:58:17.195000 audit[4986]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffd5c820d8 a2=94 a3=2 items=0 ppid=4905 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.195000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 13:58:17.195000 audit: BPF prog-id=215 op=UNLOAD Jan 20 13:58:17.195000 audit[4986]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4905 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.195000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 13:58:17.195000 audit: BPF prog-id=214 op=UNLOAD Jan 20 13:58:17.195000 audit[4986]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=e33f620 a3=e332b00 items=0 ppid=4905 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.195000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 13:58:17.204000 audit: BPF prog-id=216 op=LOAD Jan 20 13:58:17.204000 audit[4989]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd438e678 a2=98 a3=ffffd438e668 items=0 ppid=4905 pid=4989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.204000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 13:58:17.204000 audit: BPF prog-id=216 op=UNLOAD Jan 20 13:58:17.204000 audit[4989]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd438e648 a3=0 items=0 ppid=4905 pid=4989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.204000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 13:58:17.204000 audit: BPF prog-id=217 op=LOAD Jan 20 13:58:17.204000 audit[4989]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd438e528 a2=74 a3=95 items=0 ppid=4905 pid=4989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.204000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 13:58:17.204000 audit: BPF prog-id=217 op=UNLOAD Jan 20 13:58:17.204000 audit[4989]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4905 pid=4989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.204000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 13:58:17.204000 audit: BPF prog-id=218 op=LOAD Jan 20 13:58:17.204000 audit[4989]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 
a1=ffffd438e558 a2=40 a3=ffffd438e588 items=0 ppid=4905 pid=4989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.204000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 13:58:17.204000 audit: BPF prog-id=218 op=UNLOAD Jan 20 13:58:17.204000 audit[4989]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffd438e588 items=0 ppid=4905 pid=4989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.204000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 13:58:17.269189 systemd-networkd[1707]: vxlan.calico: Link UP Jan 20 13:58:17.269195 systemd-networkd[1707]: vxlan.calico: Gained carrier Jan 20 13:58:17.284000 audit: BPF prog-id=219 op=LOAD Jan 20 13:58:17.284000 audit[5017]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff112a968 a2=98 a3=fffff112a958 items=0 ppid=4905 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.284000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 13:58:17.286000 audit: BPF prog-id=219 op=UNLOAD Jan 20 13:58:17.286000 audit[5017]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff112a938 a3=0 items=0 ppid=4905 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.286000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 13:58:17.286000 audit: BPF prog-id=220 op=LOAD Jan 20 13:58:17.286000 audit[5017]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff112a648 a2=74 a3=95 items=0 ppid=4905 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.286000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 13:58:17.286000 audit: BPF prog-id=220 op=UNLOAD Jan 20 13:58:17.286000 audit[5017]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4905 pid=5017 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.286000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 13:58:17.286000 audit: BPF prog-id=221 op=LOAD Jan 20 13:58:17.286000 audit[5017]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff112a6a8 a2=94 a3=2 items=0 ppid=4905 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.286000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 13:58:17.286000 audit: BPF prog-id=221 op=UNLOAD Jan 20 13:58:17.286000 audit[5017]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4905 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.286000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 13:58:17.286000 audit: BPF prog-id=222 op=LOAD Jan 20 13:58:17.286000 audit[5017]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff112a528 a2=40 a3=fffff112a558 items=0 ppid=4905 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.286000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 13:58:17.286000 audit: BPF prog-id=222 op=UNLOAD Jan 20 13:58:17.286000 audit[5017]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=fffff112a558 items=0 ppid=4905 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.286000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 13:58:17.286000 audit: BPF prog-id=223 op=LOAD Jan 20 13:58:17.286000 audit[5017]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff112a678 a2=94 a3=b7 items=0 ppid=4905 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.286000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 13:58:17.286000 audit: BPF prog-id=223 op=UNLOAD Jan 20 13:58:17.286000 audit[5017]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4905 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.286000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 13:58:17.288000 audit: BPF prog-id=224 op=LOAD Jan 20 13:58:17.288000 audit[5017]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff1129d28 a2=94 a3=2 items=0 ppid=4905 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.288000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 13:58:17.288000 audit: BPF prog-id=224 op=UNLOAD Jan 20 13:58:17.288000 audit[5017]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4905 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.288000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 13:58:17.288000 audit: BPF prog-id=225 op=LOAD Jan 20 13:58:17.288000 audit[5017]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff1129eb8 a2=94 a3=30 items=0 ppid=4905 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.288000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 13:58:17.293000 audit: BPF prog-id=226 op=LOAD Jan 20 13:58:17.293000 audit[5021]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff65200d8 a2=98 a3=fffff65200c8 items=0 ppid=4905 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.293000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 13:58:17.294000 audit: BPF prog-id=226 op=UNLOAD Jan 20 13:58:17.294000 audit[5021]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 
a0=3 a1=57156c a2=fffff65200a8 a3=0 items=0 ppid=4905 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.294000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 13:58:17.294000 audit: BPF prog-id=227 op=LOAD Jan 20 13:58:17.294000 audit[5021]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff651fd68 a2=74 a3=95 items=0 ppid=4905 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.294000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 13:58:17.295000 audit: BPF prog-id=227 op=UNLOAD Jan 20 13:58:17.295000 audit[5021]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4905 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.295000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 13:58:17.295000 audit: BPF prog-id=228 op=LOAD Jan 20 13:58:17.295000 audit[5021]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff651fdc8 a2=94 a3=2 items=0 ppid=4905 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.295000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 13:58:17.295000 audit: BPF prog-id=228 op=UNLOAD Jan 20 13:58:17.295000 audit[5021]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4905 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.295000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 13:58:17.375000 audit: BPF prog-id=229 op=LOAD Jan 20 13:58:17.375000 audit[5021]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff651fd88 a2=40 a3=fffff651fdb8 items=0 ppid=4905 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.375000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 
13:58:17.376000 audit: BPF prog-id=229 op=UNLOAD Jan 20 13:58:17.376000 audit[5021]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=fffff651fdb8 items=0 ppid=4905 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.376000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 13:58:17.383000 audit: BPF prog-id=230 op=LOAD Jan 20 13:58:17.383000 audit[5021]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff651fd98 a2=94 a3=4 items=0 ppid=4905 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.383000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 13:58:17.383000 audit: BPF prog-id=230 op=UNLOAD Jan 20 13:58:17.383000 audit[5021]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4905 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.383000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 13:58:17.383000 audit: BPF prog-id=231 op=LOAD Jan 20 13:58:17.383000 audit[5021]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff651fbd8 a2=94 a3=5 items=0 ppid=4905 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.383000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 13:58:17.383000 audit: BPF prog-id=231 op=UNLOAD Jan 20 13:58:17.383000 audit[5021]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4905 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.383000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 13:58:17.383000 audit: BPF prog-id=232 op=LOAD Jan 20 13:58:17.383000 audit[5021]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff651fe08 a2=94 a3=6 items=0 ppid=4905 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.383000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 13:58:17.383000 audit: BPF prog-id=232 op=UNLOAD Jan 20 13:58:17.383000 audit[5021]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4905 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.383000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 13:58:17.384000 audit: BPF prog-id=233 op=LOAD Jan 20 13:58:17.384000 audit[5021]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff651f5d8 a2=94 a3=83 items=0 ppid=4905 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.384000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 13:58:17.384000 audit: BPF prog-id=234 op=LOAD Jan 20 13:58:17.384000 audit[5021]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=fffff651f398 a2=94 a3=2 items=0 ppid=4905 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.384000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 13:58:17.384000 audit: BPF prog-id=234 op=UNLOAD Jan 20 13:58:17.384000 audit[5021]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4905 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.384000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 13:58:17.384000 audit: BPF prog-id=233 op=UNLOAD Jan 20 13:58:17.384000 audit[5021]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=2003b620 a3=2002eb00 items=0 ppid=4905 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.384000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 13:58:17.392242 containerd[2143]: time="2026-01-20T13:58:17.392189203Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 13:58:17.392000 audit: BPF prog-id=225 op=UNLOAD Jan 20 13:58:17.392000 audit[4905]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 
a0=ffffffffffffff9c a1=4000e3c140 a2=0 a3=0 items=0 ppid=4881 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.392000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 20 13:58:17.561974 containerd[2143]: time="2026-01-20T13:58:17.561754267Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 13:58:17.561974 containerd[2143]: time="2026-01-20T13:58:17.561801084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 13:58:17.562983 kubelet[3709]: E0120 13:58:17.562101 3709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 13:58:17.562983 kubelet[3709]: E0120 13:58:17.563071 3709 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 13:58:17.563878 kubelet[3709]: E0120 13:58:17.563189 3709 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q6tsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77799d95d6-2jfd5_calico-system(f1bb23d8-4b1f-4aa2-8b63-89adfd023536): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 13:58:17.564713 kubelet[3709]: E0120 13:58:17.564670 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77799d95d6-2jfd5" podUID="f1bb23d8-4b1f-4aa2-8b63-89adfd023536" Jan 20 13:58:17.608000 audit[5044]: NETFILTER_CFG table=nat:122 family=2 entries=15 op=nft_register_chain pid=5044 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 13:58:17.608000 audit[5044]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffe18dac70 a2=0 a3=ffff87b12fa8 items=0 ppid=4905 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.608000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 13:58:17.615000 audit[5043]: NETFILTER_CFG table=raw:123 family=2 entries=21 op=nft_register_chain pid=5043 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 13:58:17.615000 audit[5043]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffd4c807f0 a2=0 a3=ffffbe25bfa8 items=0 ppid=4905 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.615000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 13:58:17.617000 audit[5046]: NETFILTER_CFG table=mangle:124 family=2 entries=16 op=nft_register_chain pid=5046 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 13:58:17.617000 audit[5046]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffcded3790 a2=0 a3=ffff8a9cbfa8 items=0 ppid=4905 pid=5046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.617000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 13:58:17.618000 audit[5045]: NETFILTER_CFG table=filter:125 family=2 entries=94 op=nft_register_chain pid=5045 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 13:58:17.618000 audit[5045]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffc1af5780 a2=0 a3=ffffb451ffa8 items=0 ppid=4905 pid=5045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:17.618000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 13:58:17.833498 kubelet[3709]: I0120 13:58:17.833286 3709 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32c716f6-e8c9-410b-8d5d-21bab2f36497" path="/var/lib/kubelet/pods/32c716f6-e8c9-410b-8d5d-21bab2f36497/volumes" Jan 20 13:58:17.979998 kubelet[3709]: E0120 13:58:17.979911 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77799d95d6-2jfd5" podUID="f1bb23d8-4b1f-4aa2-8b63-89adfd023536" Jan 20 
13:58:18.006000 audit[5060]: NETFILTER_CFG table=filter:126 family=2 entries=20 op=nft_register_rule pid=5060 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:58:18.006000 audit[5060]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffede8f1e0 a2=0 a3=1 items=0 ppid=3859 pid=5060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:18.006000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:58:18.012000 audit[5060]: NETFILTER_CFG table=nat:127 family=2 entries=14 op=nft_register_rule pid=5060 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:58:18.012000 audit[5060]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffede8f1e0 a2=0 a3=1 items=0 ppid=3859 pid=5060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:18.012000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:58:18.307534 systemd-networkd[1707]: vxlan.calico: Gained IPv6LL Jan 20 13:58:18.499771 systemd-networkd[1707]: cali22c1d4483f3: Gained IPv6LL Jan 20 13:58:19.832218 containerd[2143]: time="2026-01-20T13:58:19.832162407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vhqxx,Uid:23535579-9237-4da3-a34f-0ccbbd6a2ee0,Namespace:calico-system,Attempt:0,}" Jan 20 13:58:19.832597 containerd[2143]: time="2026-01-20T13:58:19.832375589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d86775db-xr92p,Uid:d927ba29-b3e9-4c16-90f8-d925c765f0ce,Namespace:calico-system,Attempt:0,}" Jan 20 13:58:19.948528 systemd-networkd[1707]: caliac8d90bad38: Link UP Jan 20 13:58:19.948641 systemd-networkd[1707]: caliac8d90bad38: Gained carrier Jan 20 13:58:19.963134 containerd[2143]: 2026-01-20 13:58:19.886 [INFO][5067] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999.1.1--f--6b32856eb5-k8s-calico--kube--controllers--68d86775db--xr92p-eth0 calico-kube-controllers-68d86775db- calico-system d927ba29-b3e9-4c16-90f8-d925c765f0ce 792 0 2026-01-20 13:57:56 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:68d86775db projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-9999.1.1-f-6b32856eb5 calico-kube-controllers-68d86775db-xr92p eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliac8d90bad38 [] [] }} ContainerID="1e21569fe0bed3d85e5e6660a57f86dcce6e76c78885aea2f09b9f1cd5c5fb09" Namespace="calico-system" Pod="calico-kube-controllers-68d86775db-xr92p" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-calico--kube--controllers--68d86775db--xr92p-" Jan 20 13:58:19.963134 containerd[2143]: 2026-01-20 13:58:19.887 [INFO][5067] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1e21569fe0bed3d85e5e6660a57f86dcce6e76c78885aea2f09b9f1cd5c5fb09" Namespace="calico-system" Pod="calico-kube-controllers-68d86775db-xr92p" 
WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-calico--kube--controllers--68d86775db--xr92p-eth0" Jan 20 13:58:19.963134 containerd[2143]: 2026-01-20 13:58:19.913 [INFO][5089] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e21569fe0bed3d85e5e6660a57f86dcce6e76c78885aea2f09b9f1cd5c5fb09" HandleID="k8s-pod-network.1e21569fe0bed3d85e5e6660a57f86dcce6e76c78885aea2f09b9f1cd5c5fb09" Workload="ci--9999.1.1--f--6b32856eb5-k8s-calico--kube--controllers--68d86775db--xr92p-eth0" Jan 20 13:58:19.963315 containerd[2143]: 2026-01-20 13:58:19.913 [INFO][5089] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1e21569fe0bed3d85e5e6660a57f86dcce6e76c78885aea2f09b9f1cd5c5fb09" HandleID="k8s-pod-network.1e21569fe0bed3d85e5e6660a57f86dcce6e76c78885aea2f09b9f1cd5c5fb09" Workload="ci--9999.1.1--f--6b32856eb5-k8s-calico--kube--controllers--68d86775db--xr92p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b230), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-9999.1.1-f-6b32856eb5", "pod":"calico-kube-controllers-68d86775db-xr92p", "timestamp":"2026-01-20 13:58:19.91355795 +0000 UTC"}, Hostname:"ci-9999.1.1-f-6b32856eb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 13:58:19.963315 containerd[2143]: 2026-01-20 13:58:19.913 [INFO][5089] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 13:58:19.963315 containerd[2143]: 2026-01-20 13:58:19.913 [INFO][5089] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 13:58:19.963315 containerd[2143]: 2026-01-20 13:58:19.914 [INFO][5089] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999.1.1-f-6b32856eb5' Jan 20 13:58:19.963315 containerd[2143]: 2026-01-20 13:58:19.919 [INFO][5089] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1e21569fe0bed3d85e5e6660a57f86dcce6e76c78885aea2f09b9f1cd5c5fb09" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:19.963315 containerd[2143]: 2026-01-20 13:58:19.923 [INFO][5089] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:19.963315 containerd[2143]: 2026-01-20 13:58:19.926 [INFO][5089] ipam/ipam.go 511: Trying affinity for 192.168.36.64/26 host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:19.963315 containerd[2143]: 2026-01-20 13:58:19.927 [INFO][5089] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.64/26 host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:19.963315 containerd[2143]: 2026-01-20 13:58:19.928 [INFO][5089] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:19.963487 containerd[2143]: 2026-01-20 13:58:19.929 [INFO][5089] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.1e21569fe0bed3d85e5e6660a57f86dcce6e76c78885aea2f09b9f1cd5c5fb09" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:19.963487 containerd[2143]: 2026-01-20 13:58:19.930 [INFO][5089] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1e21569fe0bed3d85e5e6660a57f86dcce6e76c78885aea2f09b9f1cd5c5fb09 Jan 20 13:58:19.963487 containerd[2143]: 2026-01-20 13:58:19.933 [INFO][5089] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.1e21569fe0bed3d85e5e6660a57f86dcce6e76c78885aea2f09b9f1cd5c5fb09" 
host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:19.963487 containerd[2143]: 2026-01-20 13:58:19.943 [INFO][5089] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.36.66/26] block=192.168.36.64/26 handle="k8s-pod-network.1e21569fe0bed3d85e5e6660a57f86dcce6e76c78885aea2f09b9f1cd5c5fb09" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:19.963487 containerd[2143]: 2026-01-20 13:58:19.943 [INFO][5089] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.66/26] handle="k8s-pod-network.1e21569fe0bed3d85e5e6660a57f86dcce6e76c78885aea2f09b9f1cd5c5fb09" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:19.963487 containerd[2143]: 2026-01-20 13:58:19.943 [INFO][5089] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 13:58:19.963487 containerd[2143]: 2026-01-20 13:58:19.943 [INFO][5089] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.36.66/26] IPv6=[] ContainerID="1e21569fe0bed3d85e5e6660a57f86dcce6e76c78885aea2f09b9f1cd5c5fb09" HandleID="k8s-pod-network.1e21569fe0bed3d85e5e6660a57f86dcce6e76c78885aea2f09b9f1cd5c5fb09" Workload="ci--9999.1.1--f--6b32856eb5-k8s-calico--kube--controllers--68d86775db--xr92p-eth0" Jan 20 13:58:19.963579 containerd[2143]: 2026-01-20 13:58:19.945 [INFO][5067] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1e21569fe0bed3d85e5e6660a57f86dcce6e76c78885aea2f09b9f1cd5c5fb09" Namespace="calico-system" Pod="calico-kube-controllers-68d86775db-xr92p" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-calico--kube--controllers--68d86775db--xr92p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.1.1--f--6b32856eb5-k8s-calico--kube--controllers--68d86775db--xr92p-eth0", GenerateName:"calico-kube-controllers-68d86775db-", Namespace:"calico-system", SelfLink:"", UID:"d927ba29-b3e9-4c16-90f8-d925c765f0ce", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 13, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68d86775db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.1.1-f-6b32856eb5", ContainerID:"", Pod:"calico-kube-controllers-68d86775db-xr92p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliac8d90bad38", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 13:58:19.963616 containerd[2143]: 2026-01-20 13:58:19.945 [INFO][5067] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.66/32] ContainerID="1e21569fe0bed3d85e5e6660a57f86dcce6e76c78885aea2f09b9f1cd5c5fb09" Namespace="calico-system" Pod="calico-kube-controllers-68d86775db-xr92p" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-calico--kube--controllers--68d86775db--xr92p-eth0" Jan 20 13:58:19.963616 containerd[2143]: 2026-01-20 13:58:19.945 [INFO][5067] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac8d90bad38 ContainerID="1e21569fe0bed3d85e5e6660a57f86dcce6e76c78885aea2f09b9f1cd5c5fb09" Namespace="calico-system" Pod="calico-kube-controllers-68d86775db-xr92p" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-calico--kube--controllers--68d86775db--xr92p-eth0" Jan 20 13:58:19.963616 containerd[2143]: 2026-01-20 13:58:19.947 [INFO][5067] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e21569fe0bed3d85e5e6660a57f86dcce6e76c78885aea2f09b9f1cd5c5fb09" Namespace="calico-system" Pod="calico-kube-controllers-68d86775db-xr92p" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-calico--kube--controllers--68d86775db--xr92p-eth0" Jan 20 13:58:19.963659 containerd[2143]: 2026-01-20 13:58:19.948 [INFO][5067] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1e21569fe0bed3d85e5e6660a57f86dcce6e76c78885aea2f09b9f1cd5c5fb09" Namespace="calico-system" Pod="calico-kube-controllers-68d86775db-xr92p" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-calico--kube--controllers--68d86775db--xr92p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.1.1--f--6b32856eb5-k8s-calico--kube--controllers--68d86775db--xr92p-eth0", GenerateName:"calico-kube-controllers-68d86775db-", Namespace:"calico-system", SelfLink:"", UID:"d927ba29-b3e9-4c16-90f8-d925c765f0ce", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 13, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68d86775db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.1.1-f-6b32856eb5", ContainerID:"1e21569fe0bed3d85e5e6660a57f86dcce6e76c78885aea2f09b9f1cd5c5fb09", Pod:"calico-kube-controllers-68d86775db-xr92p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliac8d90bad38", MAC:"ae:55:56:5d:31:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 13:58:19.963693 containerd[2143]: 2026-01-20 13:58:19.959 [INFO][5067] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1e21569fe0bed3d85e5e6660a57f86dcce6e76c78885aea2f09b9f1cd5c5fb09" Namespace="calico-system" Pod="calico-kube-controllers-68d86775db-xr92p" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-calico--kube--controllers--68d86775db--xr92p-eth0" Jan 20 13:58:19.972000 audit[5112]: NETFILTER_CFG table=filter:128 family=2 entries=36 op=nft_register_chain pid=5112 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 13:58:19.972000 audit[5112]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19576 a0=3 a1=ffffd2ce8e60 a2=0 a3=ffffaebdafa8 items=0 ppid=4905 pid=5112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:19.972000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 13:58:20.007539 containerd[2143]: time="2026-01-20T13:58:20.007502969Z" level=info msg="connecting to shim 1e21569fe0bed3d85e5e6660a57f86dcce6e76c78885aea2f09b9f1cd5c5fb09" address="unix:///run/containerd/s/a8344fa3784eb6ab616a0f6e1217d4ab0f298dbc0c329d068df407548da93350" namespace=k8s.io protocol=ttrpc version=3 Jan 20 13:58:20.030788 systemd[1]: Started cri-containerd-1e21569fe0bed3d85e5e6660a57f86dcce6e76c78885aea2f09b9f1cd5c5fb09.scope - libcontainer container 1e21569fe0bed3d85e5e6660a57f86dcce6e76c78885aea2f09b9f1cd5c5fb09. Jan 20 13:58:20.043000 audit: BPF prog-id=235 op=LOAD Jan 20 13:58:20.043000 audit: BPF prog-id=236 op=LOAD Jan 20 13:58:20.043000 audit[5133]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5122 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:20.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165323135363966653062656433643835653565363636306135376638 Jan 20 13:58:20.044000 audit: BPF prog-id=236 op=UNLOAD Jan 20 13:58:20.044000 audit[5133]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5122 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:20.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165323135363966653062656433643835653565363636306135376638 Jan 20 13:58:20.044000 audit: BPF prog-id=237 op=LOAD Jan 20 13:58:20.044000 audit[5133]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5122 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:20.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165323135363966653062656433643835653565363636306135376638 Jan 20 13:58:20.044000 audit: BPF prog-id=238 op=LOAD Jan 20 13:58:20.044000 audit[5133]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5122 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:20.044000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165323135363966653062656433643835653565363636306135376638 Jan 20 13:58:20.044000 audit: BPF prog-id=238 op=UNLOAD Jan 20 13:58:20.044000 audit[5133]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5122 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:20.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165323135363966653062656433643835653565363636306135376638 Jan 20 13:58:20.044000 audit: BPF prog-id=237 op=UNLOAD Jan 20 13:58:20.044000 audit[5133]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5122 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:20.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165323135363966653062656433643835653565363636306135376638 Jan 20 13:58:20.044000 audit: BPF prog-id=239 op=LOAD Jan 20 13:58:20.044000 audit[5133]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5122 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:20.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165323135363966653062656433643835653565363636306135376638 Jan 20 13:58:20.066378 systemd-networkd[1707]: calic1fbfe50d18: Link UP Jan 20 13:58:20.066567 systemd-networkd[1707]: calic1fbfe50d18: Gained carrier Jan 20 13:58:20.083684 containerd[2143]: time="2026-01-20T13:58:20.082526647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d86775db-xr92p,Uid:d927ba29-b3e9-4c16-90f8-d925c765f0ce,Namespace:calico-system,Attempt:0,} returns sandbox id \"1e21569fe0bed3d85e5e6660a57f86dcce6e76c78885aea2f09b9f1cd5c5fb09\"" Jan 20 13:58:20.085803 containerd[2143]: time="2026-01-20T13:58:20.085363389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 13:58:20.085803 containerd[2143]: 2026-01-20 13:58:19.887 [INFO][5063] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999.1.1--f--6b32856eb5-k8s-csi--node--driver--vhqxx-eth0 csi-node-driver- calico-system 23535579-9237-4da3-a34f-0ccbbd6a2ee0 727 0 2026-01-20 13:57:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-9999.1.1-f-6b32856eb5 csi-node-driver-vhqxx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic1fbfe50d18 [] [] }} ContainerID="5c0c25194f5a7c87f4de19d2a0117cd87792f0c1dc0e318f26d42f6892d476f8" Namespace="calico-system" Pod="csi-node-driver-vhqxx" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-csi--node--driver--vhqxx-" Jan 20 13:58:20.085803 containerd[2143]: 2026-01-20 13:58:19.887 [INFO][5063] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5c0c25194f5a7c87f4de19d2a0117cd87792f0c1dc0e318f26d42f6892d476f8" Namespace="calico-system" Pod="csi-node-driver-vhqxx" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-csi--node--driver--vhqxx-eth0" Jan 20 13:58:20.085803 containerd[2143]: 2026-01-20 13:58:19.914 [INFO][5094] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5c0c25194f5a7c87f4de19d2a0117cd87792f0c1dc0e318f26d42f6892d476f8" HandleID="k8s-pod-network.5c0c25194f5a7c87f4de19d2a0117cd87792f0c1dc0e318f26d42f6892d476f8" Workload="ci--9999.1.1--f--6b32856eb5-k8s-csi--node--driver--vhqxx-eth0" Jan 20 13:58:20.085952 containerd[2143]: 2026-01-20 13:58:19.914 [INFO][5094] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5c0c25194f5a7c87f4de19d2a0117cd87792f0c1dc0e318f26d42f6892d476f8" HandleID="k8s-pod-network.5c0c25194f5a7c87f4de19d2a0117cd87792f0c1dc0e318f26d42f6892d476f8" Workload="ci--9999.1.1--f--6b32856eb5-k8s-csi--node--driver--vhqxx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab3a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-9999.1.1-f-6b32856eb5", "pod":"csi-node-driver-vhqxx", "timestamp":"2026-01-20 13:58:19.914862757 +0000 UTC"}, Hostname:"ci-9999.1.1-f-6b32856eb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 13:58:20.085952 containerd[2143]: 2026-01-20 13:58:19.915 [INFO][5094] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 13:58:20.085952 containerd[2143]: 2026-01-20 13:58:19.943 [INFO][5094] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 13:58:20.085952 containerd[2143]: 2026-01-20 13:58:19.943 [INFO][5094] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999.1.1-f-6b32856eb5' Jan 20 13:58:20.085952 containerd[2143]: 2026-01-20 13:58:20.021 [INFO][5094] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5c0c25194f5a7c87f4de19d2a0117cd87792f0c1dc0e318f26d42f6892d476f8" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:20.085952 containerd[2143]: 2026-01-20 13:58:20.027 [INFO][5094] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:20.085952 containerd[2143]: 2026-01-20 13:58:20.033 [INFO][5094] ipam/ipam.go 511: Trying affinity for 192.168.36.64/26 host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:20.085952 containerd[2143]: 2026-01-20 13:58:20.036 [INFO][5094] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.64/26 host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:20.085952 containerd[2143]: 2026-01-20 13:58:20.039 [INFO][5094] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:20.086109 containerd[2143]: 2026-01-20 13:58:20.039 [INFO][5094] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.5c0c25194f5a7c87f4de19d2a0117cd87792f0c1dc0e318f26d42f6892d476f8" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:20.086109 containerd[2143]: 2026-01-20 13:58:20.041 [INFO][5094] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5c0c25194f5a7c87f4de19d2a0117cd87792f0c1dc0e318f26d42f6892d476f8 Jan 20 13:58:20.086109 containerd[2143]: 2026-01-20 13:58:20.046 [INFO][5094] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.5c0c25194f5a7c87f4de19d2a0117cd87792f0c1dc0e318f26d42f6892d476f8" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:20.086109 containerd[2143]: 2026-01-20 13:58:20.055 [INFO][5094] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.36.67/26] block=192.168.36.64/26 handle="k8s-pod-network.5c0c25194f5a7c87f4de19d2a0117cd87792f0c1dc0e318f26d42f6892d476f8" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:20.086109 containerd[2143]: 2026-01-20 13:58:20.055 [INFO][5094] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.67/26] handle="k8s-pod-network.5c0c25194f5a7c87f4de19d2a0117cd87792f0c1dc0e318f26d42f6892d476f8" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:20.086109 containerd[2143]: 2026-01-20 13:58:20.056 [INFO][5094] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 13:58:20.086109 containerd[2143]: 2026-01-20 13:58:20.056 [INFO][5094] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.36.67/26] IPv6=[] ContainerID="5c0c25194f5a7c87f4de19d2a0117cd87792f0c1dc0e318f26d42f6892d476f8" HandleID="k8s-pod-network.5c0c25194f5a7c87f4de19d2a0117cd87792f0c1dc0e318f26d42f6892d476f8" Workload="ci--9999.1.1--f--6b32856eb5-k8s-csi--node--driver--vhqxx-eth0" Jan 20 13:58:20.086209 containerd[2143]: 2026-01-20 13:58:20.058 [INFO][5063] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5c0c25194f5a7c87f4de19d2a0117cd87792f0c1dc0e318f26d42f6892d476f8" Namespace="calico-system" Pod="csi-node-driver-vhqxx" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-csi--node--driver--vhqxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.1.1--f--6b32856eb5-k8s-csi--node--driver--vhqxx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"23535579-9237-4da3-a34f-0ccbbd6a2ee0", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 13, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.1.1-f-6b32856eb5", ContainerID:"", Pod:"csi-node-driver-vhqxx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.36.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic1fbfe50d18", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 13:58:20.086945 containerd[2143]: 2026-01-20 13:58:20.059 [INFO][5063] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.67/32] ContainerID="5c0c25194f5a7c87f4de19d2a0117cd87792f0c1dc0e318f26d42f6892d476f8" Namespace="calico-system" Pod="csi-node-driver-vhqxx" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-csi--node--driver--vhqxx-eth0" Jan 20 13:58:20.086945 containerd[2143]: 2026-01-20 13:58:20.059 [INFO][5063] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1fbfe50d18 ContainerID="5c0c25194f5a7c87f4de19d2a0117cd87792f0c1dc0e318f26d42f6892d476f8" Namespace="calico-system" Pod="csi-node-driver-vhqxx" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-csi--node--driver--vhqxx-eth0" Jan 20 13:58:20.086945 containerd[2143]: 2026-01-20 13:58:20.066 [INFO][5063] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5c0c25194f5a7c87f4de19d2a0117cd87792f0c1dc0e318f26d42f6892d476f8" Namespace="calico-system" Pod="csi-node-driver-vhqxx" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-csi--node--driver--vhqxx-eth0" Jan 20 13:58:20.087013 containerd[2143]: 2026-01-20 13:58:20.067 [INFO][5063] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5c0c25194f5a7c87f4de19d2a0117cd87792f0c1dc0e318f26d42f6892d476f8" Namespace="calico-system" Pod="csi-node-driver-vhqxx" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-csi--node--driver--vhqxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.1.1--f--6b32856eb5-k8s-csi--node--driver--vhqxx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"23535579-9237-4da3-a34f-0ccbbd6a2ee0", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 13, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.1.1-f-6b32856eb5", ContainerID:"5c0c25194f5a7c87f4de19d2a0117cd87792f0c1dc0e318f26d42f6892d476f8", Pod:"csi-node-driver-vhqxx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.36.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic1fbfe50d18", MAC:"5e:95:a3:4f:76:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 13:58:20.087062 containerd[2143]: 2026-01-20 13:58:20.081 [INFO][5063] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5c0c25194f5a7c87f4de19d2a0117cd87792f0c1dc0e318f26d42f6892d476f8" Namespace="calico-system" Pod="csi-node-driver-vhqxx" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-csi--node--driver--vhqxx-eth0" Jan 20 13:58:20.098000 audit[5165]: NETFILTER_CFG table=filter:129 family=2 entries=40 op=nft_register_chain pid=5165 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 13:58:20.098000 audit[5165]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20764 a0=3 a1=fffff58001f0 a2=0 a3=ffff93de6fa8 items=0 ppid=4905 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:20.098000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 13:58:20.129319 containerd[2143]: time="2026-01-20T13:58:20.128796947Z" level=info msg="connecting to shim 5c0c25194f5a7c87f4de19d2a0117cd87792f0c1dc0e318f26d42f6892d476f8" address="unix:///run/containerd/s/cdfc612fe14f387f5d6c2b03c44925da82d56f91288ff1dc9d5900c70cf3a601" namespace=k8s.io protocol=ttrpc version=3 Jan 20 13:58:20.152560 systemd[1]: Started cri-containerd-5c0c25194f5a7c87f4de19d2a0117cd87792f0c1dc0e318f26d42f6892d476f8.scope - libcontainer container 5c0c25194f5a7c87f4de19d2a0117cd87792f0c1dc0e318f26d42f6892d476f8. 
Jan 20 13:58:20.158000 audit: BPF prog-id=240 op=LOAD Jan 20 13:58:20.159000 audit: BPF prog-id=241 op=LOAD Jan 20 13:58:20.159000 audit[5185]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186180 a2=98 a3=0 items=0 ppid=5174 pid=5185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:20.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563306332353139346635613763383766346465313964326130313137 Jan 20 13:58:20.159000 audit: BPF prog-id=241 op=UNLOAD Jan 20 13:58:20.159000 audit[5185]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5174 pid=5185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:20.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563306332353139346635613763383766346465313964326130313137 Jan 20 13:58:20.159000 audit: BPF prog-id=242 op=LOAD Jan 20 13:58:20.159000 audit[5185]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=5174 pid=5185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:20.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563306332353139346635613763383766346465313964326130313137 Jan 20 13:58:20.159000 audit: BPF prog-id=243 op=LOAD Jan 20 13:58:20.159000 audit[5185]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000186168 a2=98 a3=0 items=0 ppid=5174 pid=5185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:20.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563306332353139346635613763383766346465313964326130313137 Jan 20 13:58:20.159000 audit: BPF prog-id=243 op=UNLOAD Jan 20 13:58:20.159000 audit[5185]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5174 pid=5185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:20.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563306332353139346635613763383766346465313964326130313137 Jan 20 13:58:20.159000 audit: BPF prog-id=242 op=UNLOAD Jan 20 13:58:20.159000 audit[5185]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5174 pid=5185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:20.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563306332353139346635613763383766346465313964326130313137 Jan 20 13:58:20.159000 audit: BPF prog-id=244 op=LOAD Jan 20 13:58:20.159000 audit[5185]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186648 a2=98 a3=0 items=0 ppid=5174 pid=5185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:20.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563306332353139346635613763383766346465313964326130313137 Jan 20 13:58:20.179975 containerd[2143]: time="2026-01-20T13:58:20.179941692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vhqxx,Uid:23535579-9237-4da3-a34f-0ccbbd6a2ee0,Namespace:calico-system,Attempt:0,} returns sandbox id \"5c0c25194f5a7c87f4de19d2a0117cd87792f0c1dc0e318f26d42f6892d476f8\"" Jan 20 13:58:20.339672 containerd[2143]: time="2026-01-20T13:58:20.339617210Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 13:58:20.342744 containerd[2143]: time="2026-01-20T13:58:20.342650294Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 13:58:20.342744 containerd[2143]: time="2026-01-20T13:58:20.342698120Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 13:58:20.342907 kubelet[3709]: E0120 13:58:20.342864 3709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 13:58:20.343455 kubelet[3709]: E0120 13:58:20.342910 3709 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 13:58:20.343484 containerd[2143]: time="2026-01-20T13:58:20.343170550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 13:58:20.344932 kubelet[3709]: E0120 13:58:20.344849 3709 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wj6nr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68d86775db-xr92p_calico-system(d927ba29-b3e9-4c16-90f8-d925c765f0ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 13:58:20.346151 kubelet[3709]: E0120 13:58:20.346119 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68d86775db-xr92p" podUID="d927ba29-b3e9-4c16-90f8-d925c765f0ce" Jan 20 13:58:20.591660 containerd[2143]: time="2026-01-20T13:58:20.591538889Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 13:58:20.594552 containerd[2143]: 
time="2026-01-20T13:58:20.594459986Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 13:58:20.594552 containerd[2143]: time="2026-01-20T13:58:20.594505339Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 13:58:20.594721 kubelet[3709]: E0120 13:58:20.594676 3709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 13:58:20.594778 kubelet[3709]: E0120 13:58:20.594730 3709 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 13:58:20.594856 kubelet[3709]: E0120 13:58:20.594824 3709 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-szm87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vhqxx_calico-system(23535579-9237-4da3-a34f-0ccbbd6a2ee0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 13:58:20.596943 containerd[2143]: time="2026-01-20T13:58:20.596917060Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 13:58:20.832492 containerd[2143]: time="2026-01-20T13:58:20.832252300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9ztjh,Uid:85bd9e66-9697-4ce6-b203-5bf288af5ea8,Namespace:calico-system,Attempt:0,}" Jan 20 13:58:20.832833 containerd[2143]: time="2026-01-20T13:58:20.832568421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc74856cf-7w8mb,Uid:5c39ce3f-4fc4-445a-adab-faa3675de94e,Namespace:calico-apiserver,Attempt:0,}" Jan 20 13:58:20.954560 systemd-networkd[1707]: cali138baf8a220: Link UP Jan 20 13:58:20.954846 systemd-networkd[1707]: cali138baf8a220: Gained carrier Jan 20 13:58:20.971414 containerd[2143]: 2026-01-20 13:58:20.881 [INFO][5215] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999.1.1--f--6b32856eb5-k8s-calico--apiserver--7dc74856cf--7w8mb-eth0 calico-apiserver-7dc74856cf- calico-apiserver 5c39ce3f-4fc4-445a-adab-faa3675de94e 804 0 2026-01-20 13:57:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7dc74856cf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-9999.1.1-f-6b32856eb5 calico-apiserver-7dc74856cf-7w8mb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali138baf8a220 [] [] }} ContainerID="935b9d0b4eb2d0a4f31a6baf35c2a66ba16d7dc0158483dd5381ebaf92a18290" Namespace="calico-apiserver" Pod="calico-apiserver-7dc74856cf-7w8mb" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-calico--apiserver--7dc74856cf--7w8mb-" Jan 20 13:58:20.971414 containerd[2143]: 2026-01-20 13:58:20.881 [INFO][5215] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="935b9d0b4eb2d0a4f31a6baf35c2a66ba16d7dc0158483dd5381ebaf92a18290" Namespace="calico-apiserver" Pod="calico-apiserver-7dc74856cf-7w8mb" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-calico--apiserver--7dc74856cf--7w8mb-eth0" Jan 20 13:58:20.971414 containerd[2143]: 2026-01-20 13:58:20.911 [INFO][5237] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="935b9d0b4eb2d0a4f31a6baf35c2a66ba16d7dc0158483dd5381ebaf92a18290" HandleID="k8s-pod-network.935b9d0b4eb2d0a4f31a6baf35c2a66ba16d7dc0158483dd5381ebaf92a18290" Workload="ci--9999.1.1--f--6b32856eb5-k8s-calico--apiserver--7dc74856cf--7w8mb-eth0" Jan 20 13:58:20.971660 containerd[2143]: 2026-01-20 13:58:20.911 [INFO][5237] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="935b9d0b4eb2d0a4f31a6baf35c2a66ba16d7dc0158483dd5381ebaf92a18290" HandleID="k8s-pod-network.935b9d0b4eb2d0a4f31a6baf35c2a66ba16d7dc0158483dd5381ebaf92a18290" Workload="ci--9999.1.1--f--6b32856eb5-k8s-calico--apiserver--7dc74856cf--7w8mb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3260), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-9999.1.1-f-6b32856eb5", "pod":"calico-apiserver-7dc74856cf-7w8mb", "timestamp":"2026-01-20 13:58:20.911045868 +0000 UTC"}, Hostname:"ci-9999.1.1-f-6b32856eb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 13:58:20.971660 containerd[2143]: 2026-01-20 13:58:20.911 [INFO][5237] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM 
lock. Jan 20 13:58:20.971660 containerd[2143]: 2026-01-20 13:58:20.911 [INFO][5237] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 13:58:20.971660 containerd[2143]: 2026-01-20 13:58:20.911 [INFO][5237] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999.1.1-f-6b32856eb5' Jan 20 13:58:20.971660 containerd[2143]: 2026-01-20 13:58:20.917 [INFO][5237] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.935b9d0b4eb2d0a4f31a6baf35c2a66ba16d7dc0158483dd5381ebaf92a18290" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:20.971660 containerd[2143]: 2026-01-20 13:58:20.924 [INFO][5237] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:20.971660 containerd[2143]: 2026-01-20 13:58:20.928 [INFO][5237] ipam/ipam.go 511: Trying affinity for 192.168.36.64/26 host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:20.971660 containerd[2143]: 2026-01-20 13:58:20.930 [INFO][5237] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.64/26 host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:20.971660 containerd[2143]: 2026-01-20 13:58:20.931 [INFO][5237] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:20.972001 containerd[2143]: 2026-01-20 13:58:20.931 [INFO][5237] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.935b9d0b4eb2d0a4f31a6baf35c2a66ba16d7dc0158483dd5381ebaf92a18290" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:20.972001 containerd[2143]: 2026-01-20 13:58:20.932 [INFO][5237] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.935b9d0b4eb2d0a4f31a6baf35c2a66ba16d7dc0158483dd5381ebaf92a18290 Jan 20 13:58:20.972001 containerd[2143]: 2026-01-20 13:58:20.936 [INFO][5237] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.935b9d0b4eb2d0a4f31a6baf35c2a66ba16d7dc0158483dd5381ebaf92a18290" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:20.972001 containerd[2143]: 2026-01-20 13:58:20.946 [INFO][5237] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.36.68/26] block=192.168.36.64/26 handle="k8s-pod-network.935b9d0b4eb2d0a4f31a6baf35c2a66ba16d7dc0158483dd5381ebaf92a18290" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:20.972001 containerd[2143]: 2026-01-20 13:58:20.946 [INFO][5237] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.68/26] handle="k8s-pod-network.935b9d0b4eb2d0a4f31a6baf35c2a66ba16d7dc0158483dd5381ebaf92a18290" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:20.972001 containerd[2143]: 2026-01-20 13:58:20.946 [INFO][5237] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 13:58:20.972001 containerd[2143]: 2026-01-20 13:58:20.946 [INFO][5237] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.36.68/26] IPv6=[] ContainerID="935b9d0b4eb2d0a4f31a6baf35c2a66ba16d7dc0158483dd5381ebaf92a18290" HandleID="k8s-pod-network.935b9d0b4eb2d0a4f31a6baf35c2a66ba16d7dc0158483dd5381ebaf92a18290" Workload="ci--9999.1.1--f--6b32856eb5-k8s-calico--apiserver--7dc74856cf--7w8mb-eth0" Jan 20 13:58:20.972333 containerd[2143]: 2026-01-20 13:58:20.950 [INFO][5215] cni-plugin/k8s.go 418: Populated endpoint ContainerID="935b9d0b4eb2d0a4f31a6baf35c2a66ba16d7dc0158483dd5381ebaf92a18290" Namespace="calico-apiserver" Pod="calico-apiserver-7dc74856cf-7w8mb" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-calico--apiserver--7dc74856cf--7w8mb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.1.1--f--6b32856eb5-k8s-calico--apiserver--7dc74856cf--7w8mb-eth0", GenerateName:"calico-apiserver-7dc74856cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"5c39ce3f-4fc4-445a-adab-faa3675de94e", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 13, 57, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dc74856cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.1.1-f-6b32856eb5", ContainerID:"", Pod:"calico-apiserver-7dc74856cf-7w8mb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali138baf8a220", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 13:58:20.972583 containerd[2143]: 2026-01-20 13:58:20.950 [INFO][5215] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.68/32] ContainerID="935b9d0b4eb2d0a4f31a6baf35c2a66ba16d7dc0158483dd5381ebaf92a18290" Namespace="calico-apiserver" Pod="calico-apiserver-7dc74856cf-7w8mb" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-calico--apiserver--7dc74856cf--7w8mb-eth0" Jan 20 13:58:20.972583 containerd[2143]: 2026-01-20 13:58:20.950 [INFO][5215] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali138baf8a220 ContainerID="935b9d0b4eb2d0a4f31a6baf35c2a66ba16d7dc0158483dd5381ebaf92a18290" Namespace="calico-apiserver" Pod="calico-apiserver-7dc74856cf-7w8mb" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-calico--apiserver--7dc74856cf--7w8mb-eth0" Jan 20 13:58:20.972583 containerd[2143]: 2026-01-20 13:58:20.952 [INFO][5215] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="935b9d0b4eb2d0a4f31a6baf35c2a66ba16d7dc0158483dd5381ebaf92a18290" Namespace="calico-apiserver" Pod="calico-apiserver-7dc74856cf-7w8mb" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-calico--apiserver--7dc74856cf--7w8mb-eth0" Jan 20 13:58:20.972867 containerd[2143]: 2026-01-20 13:58:20.953 
[INFO][5215] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="935b9d0b4eb2d0a4f31a6baf35c2a66ba16d7dc0158483dd5381ebaf92a18290" Namespace="calico-apiserver" Pod="calico-apiserver-7dc74856cf-7w8mb" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-calico--apiserver--7dc74856cf--7w8mb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.1.1--f--6b32856eb5-k8s-calico--apiserver--7dc74856cf--7w8mb-eth0", GenerateName:"calico-apiserver-7dc74856cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"5c39ce3f-4fc4-445a-adab-faa3675de94e", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 13, 57, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dc74856cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.1.1-f-6b32856eb5", ContainerID:"935b9d0b4eb2d0a4f31a6baf35c2a66ba16d7dc0158483dd5381ebaf92a18290", Pod:"calico-apiserver-7dc74856cf-7w8mb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali138baf8a220", MAC:"6e:a9:3b:83:5e:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 13:58:20.972915 containerd[2143]: 2026-01-20 13:58:20.968 [INFO][5215] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="935b9d0b4eb2d0a4f31a6baf35c2a66ba16d7dc0158483dd5381ebaf92a18290" Namespace="calico-apiserver" Pod="calico-apiserver-7dc74856cf-7w8mb" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-calico--apiserver--7dc74856cf--7w8mb-eth0" Jan 20 13:58:20.982000 audit[5259]: NETFILTER_CFG table=filter:130 family=2 entries=58 op=nft_register_chain pid=5259 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 13:58:20.987271 kubelet[3709]: E0120 13:58:20.987240 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68d86775db-xr92p" podUID="d927ba29-b3e9-4c16-90f8-d925c765f0ce" Jan 20 13:58:20.988000 kernel: kauditd_printk_skb: 287 callbacks suppressed Jan 20 13:58:20.988113 kernel: audit: type=1325 audit(1768917500.982:707): table=filter:130 family=2 entries=58 op=nft_register_chain pid=5259 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 13:58:20.982000 audit[5259]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=30584 a0=3 a1=fffffec35390 a2=0 a3=ffff7f8c2fa8 items=0 ppid=4905 
pid=5259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:21.016405 kernel: audit: type=1300 audit(1768917500.982:707): arch=c00000b7 syscall=211 success=yes exit=30584 a0=3 a1=fffffec35390 a2=0 a3=ffff7f8c2fa8 items=0 ppid=4905 pid=5259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:21.021396 containerd[2143]: time="2026-01-20T13:58:21.020726621Z" level=info msg="connecting to shim 935b9d0b4eb2d0a4f31a6baf35c2a66ba16d7dc0158483dd5381ebaf92a18290" address="unix:///run/containerd/s/3577c39682d57357d713940e09e8d5b8dcd9d06a70c97f54598f96eea1d03d36" namespace=k8s.io protocol=ttrpc version=3 Jan 20 13:58:20.982000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 13:58:21.034639 kernel: audit: type=1327 audit(1768917500.982:707): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 13:58:21.050753 systemd[1]: Started cri-containerd-935b9d0b4eb2d0a4f31a6baf35c2a66ba16d7dc0158483dd5381ebaf92a18290.scope - libcontainer container 935b9d0b4eb2d0a4f31a6baf35c2a66ba16d7dc0158483dd5381ebaf92a18290. Jan 20 13:58:21.062000 audit: BPF prog-id=245 op=LOAD Jan 20 13:58:21.067000 audit: BPF prog-id=246 op=LOAD Jan 20 13:58:21.074784 kernel: audit: type=1334 audit(1768917501.062:708): prog-id=245 op=LOAD Jan 20 13:58:21.074837 kernel: audit: type=1334 audit(1768917501.067:709): prog-id=246 op=LOAD Jan 20 13:58:21.067000 audit[5279]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=5267 pid=5279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:21.091844 kernel: audit: type=1300 audit(1768917501.067:709): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=5267 pid=5279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:21.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933356239643062346562326430613466333161366261663335633261 Jan 20 13:58:21.099055 systemd-networkd[1707]: cali40d38a55f8c: Link UP Jan 20 13:58:21.100077 systemd-networkd[1707]: cali40d38a55f8c: Gained carrier Jan 20 13:58:21.110464 kernel: audit: type=1327 audit(1768917501.067:709): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933356239643062346562326430613466333161366261663335633261 Jan 20 13:58:21.067000 audit: BPF prog-id=246 op=UNLOAD Jan 20 13:58:21.116615 kernel: audit: type=1334 audit(1768917501.067:710): prog-id=246 op=UNLOAD Jan 20 13:58:21.067000 audit[5279]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5267 pid=5279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:21.134759 kernel: audit: type=1300 audit(1768917501.067:710): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5267 pid=5279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:21.135006 containerd[2143]: time="2026-01-20T13:58:21.134971073Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 13:58:21.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933356239643062346562326430613466333161366261663335633261 Jan 20 13:58:21.140627 containerd[2143]: time="2026-01-20T13:58:21.139617302Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 13:58:21.140627 containerd[2143]: time="2026-01-20T13:58:21.139708544Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 13:58:21.141748 kubelet[3709]: E0120 13:58:21.141219 3709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 13:58:21.141748 kubelet[3709]: E0120 13:58:21.141361 3709 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 13:58:21.142177 kubelet[3709]: E0120 13:58:21.141976 3709 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-szm87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vhqxx_calico-system(23535579-9237-4da3-a34f-0ccbbd6a2ee0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 13:58:21.146329 kubelet[3709]: E0120 13:58:21.145955 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vhqxx" podUID="23535579-9237-4da3-a34f-0ccbbd6a2ee0" Jan 20 13:58:21.155325 kernel: audit: type=1327 audit(1768917501.067:710): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933356239643062346562326430613466333161366261663335633261 Jan 20 13:58:21.161063 containerd[2143]: 2026-01-20 13:58:20.889 [INFO][5211] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999.1.1--f--6b32856eb5-k8s-goldmane--666569f655--9ztjh-eth0 goldmane-666569f655- calico-system 85bd9e66-9697-4ce6-b203-5bf288af5ea8 799 0 2026-01-20 13:57:54 +0000 UTC map[app.kubernetes.io/name:goldmane 
k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-9999.1.1-f-6b32856eb5 goldmane-666569f655-9ztjh eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali40d38a55f8c [] [] }} ContainerID="32ef61025e78508f0c0060aa0a3306df7afd424a2eeb19938d348038a1e0e1e6" Namespace="calico-system" Pod="goldmane-666569f655-9ztjh" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-goldmane--666569f655--9ztjh-" Jan 20 13:58:21.161063 containerd[2143]: 2026-01-20 13:58:20.890 [INFO][5211] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="32ef61025e78508f0c0060aa0a3306df7afd424a2eeb19938d348038a1e0e1e6" Namespace="calico-system" Pod="goldmane-666569f655-9ztjh" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-goldmane--666569f655--9ztjh-eth0" Jan 20 13:58:21.161063 containerd[2143]: 2026-01-20 13:58:20.914 [INFO][5242] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="32ef61025e78508f0c0060aa0a3306df7afd424a2eeb19938d348038a1e0e1e6" HandleID="k8s-pod-network.32ef61025e78508f0c0060aa0a3306df7afd424a2eeb19938d348038a1e0e1e6" Workload="ci--9999.1.1--f--6b32856eb5-k8s-goldmane--666569f655--9ztjh-eth0" Jan 20 13:58:21.161199 containerd[2143]: 2026-01-20 13:58:20.914 [INFO][5242] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="32ef61025e78508f0c0060aa0a3306df7afd424a2eeb19938d348038a1e0e1e6" HandleID="k8s-pod-network.32ef61025e78508f0c0060aa0a3306df7afd424a2eeb19938d348038a1e0e1e6" Workload="ci--9999.1.1--f--6b32856eb5-k8s-goldmane--666569f655--9ztjh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3880), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-9999.1.1-f-6b32856eb5", "pod":"goldmane-666569f655-9ztjh", "timestamp":"2026-01-20 13:58:20.914832263 +0000 UTC"}, Hostname:"ci-9999.1.1-f-6b32856eb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 13:58:21.161199 containerd[2143]: 2026-01-20 13:58:20.914 [INFO][5242] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 13:58:21.161199 containerd[2143]: 2026-01-20 13:58:20.946 [INFO][5242] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 13:58:21.161199 containerd[2143]: 2026-01-20 13:58:20.946 [INFO][5242] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999.1.1-f-6b32856eb5' Jan 20 13:58:21.161199 containerd[2143]: 2026-01-20 13:58:21.023 [INFO][5242] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.32ef61025e78508f0c0060aa0a3306df7afd424a2eeb19938d348038a1e0e1e6" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:21.161199 containerd[2143]: 2026-01-20 13:58:21.039 [INFO][5242] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:21.161199 containerd[2143]: 2026-01-20 13:58:21.044 [INFO][5242] ipam/ipam.go 511: Trying affinity for 192.168.36.64/26 host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:21.161199 containerd[2143]: 2026-01-20 13:58:21.046 [INFO][5242] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.64/26 host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:21.161199 containerd[2143]: 2026-01-20 13:58:21.048 [INFO][5242] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:21.161812 containerd[2143]: 2026-01-20 13:58:21.048 [INFO][5242] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.32ef61025e78508f0c0060aa0a3306df7afd424a2eeb19938d348038a1e0e1e6" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:21.161812 containerd[2143]: 2026-01-20 13:58:21.050 [INFO][5242] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.32ef61025e78508f0c0060aa0a3306df7afd424a2eeb19938d348038a1e0e1e6 Jan 20 13:58:21.161812 containerd[2143]: 2026-01-20 13:58:21.060 [INFO][5242] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.32ef61025e78508f0c0060aa0a3306df7afd424a2eeb19938d348038a1e0e1e6" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:21.161812 containerd[2143]: 2026-01-20 13:58:21.077 [INFO][5242] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.36.69/26] block=192.168.36.64/26 handle="k8s-pod-network.32ef61025e78508f0c0060aa0a3306df7afd424a2eeb19938d348038a1e0e1e6" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:21.161812 containerd[2143]: 2026-01-20 13:58:21.077 [INFO][5242] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.69/26] handle="k8s-pod-network.32ef61025e78508f0c0060aa0a3306df7afd424a2eeb19938d348038a1e0e1e6" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:21.161812 containerd[2143]: 2026-01-20 13:58:21.077 [INFO][5242] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 13:58:21.161812 containerd[2143]: 2026-01-20 13:58:21.077 [INFO][5242] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.36.69/26] IPv6=[] ContainerID="32ef61025e78508f0c0060aa0a3306df7afd424a2eeb19938d348038a1e0e1e6" HandleID="k8s-pod-network.32ef61025e78508f0c0060aa0a3306df7afd424a2eeb19938d348038a1e0e1e6" Workload="ci--9999.1.1--f--6b32856eb5-k8s-goldmane--666569f655--9ztjh-eth0" Jan 20 13:58:21.161944 containerd[2143]: 2026-01-20 13:58:21.096 [INFO][5211] cni-plugin/k8s.go 418: Populated endpoint ContainerID="32ef61025e78508f0c0060aa0a3306df7afd424a2eeb19938d348038a1e0e1e6" Namespace="calico-system" Pod="goldmane-666569f655-9ztjh" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-goldmane--666569f655--9ztjh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.1.1--f--6b32856eb5-k8s-goldmane--666569f655--9ztjh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"85bd9e66-9697-4ce6-b203-5bf288af5ea8", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 13, 57, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.1.1-f-6b32856eb5", ContainerID:"", Pod:"goldmane-666569f655-9ztjh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.36.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali40d38a55f8c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 13:58:21.162014 containerd[2143]: 2026-01-20 13:58:21.096 [INFO][5211] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.69/32] ContainerID="32ef61025e78508f0c0060aa0a3306df7afd424a2eeb19938d348038a1e0e1e6" Namespace="calico-system" Pod="goldmane-666569f655-9ztjh" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-goldmane--666569f655--9ztjh-eth0" Jan 20 13:58:21.162014 containerd[2143]: 2026-01-20 13:58:21.096 [INFO][5211] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali40d38a55f8c ContainerID="32ef61025e78508f0c0060aa0a3306df7afd424a2eeb19938d348038a1e0e1e6" Namespace="calico-system" Pod="goldmane-666569f655-9ztjh" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-goldmane--666569f655--9ztjh-eth0" Jan 20 13:58:21.162014 containerd[2143]: 2026-01-20 13:58:21.111 [INFO][5211] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="32ef61025e78508f0c0060aa0a3306df7afd424a2eeb19938d348038a1e0e1e6" Namespace="calico-system" Pod="goldmane-666569f655-9ztjh" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-goldmane--666569f655--9ztjh-eth0" Jan 20 13:58:21.162111 containerd[2143]: 2026-01-20 13:58:21.115 [INFO][5211] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="32ef61025e78508f0c0060aa0a3306df7afd424a2eeb19938d348038a1e0e1e6" 
Namespace="calico-system" Pod="goldmane-666569f655-9ztjh" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-goldmane--666569f655--9ztjh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.1.1--f--6b32856eb5-k8s-goldmane--666569f655--9ztjh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"85bd9e66-9697-4ce6-b203-5bf288af5ea8", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 13, 57, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.1.1-f-6b32856eb5", ContainerID:"32ef61025e78508f0c0060aa0a3306df7afd424a2eeb19938d348038a1e0e1e6", Pod:"goldmane-666569f655-9ztjh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.36.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali40d38a55f8c", MAC:"6a:75:32:be:e6:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 13:58:21.162157 containerd[2143]: 2026-01-20 13:58:21.145 [INFO][5211] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="32ef61025e78508f0c0060aa0a3306df7afd424a2eeb19938d348038a1e0e1e6" Namespace="calico-system" Pod="goldmane-666569f655-9ztjh" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-goldmane--666569f655--9ztjh-eth0" Jan 20 13:58:21.067000 audit: BPF prog-id=247 op=LOAD Jan 20 13:58:21.067000 audit[5279]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=5267 pid=5279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:21.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933356239643062346562326430613466333161366261663335633261 Jan 20 13:58:21.067000 audit: BPF prog-id=248 op=LOAD Jan 20 13:58:21.067000 audit[5279]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=5267 pid=5279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:21.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933356239643062346562326430613466333161366261663335633261 Jan 20 13:58:21.067000 audit: BPF prog-id=248 op=UNLOAD Jan 20 13:58:21.067000 audit[5279]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 
ppid=5267 pid=5279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:21.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933356239643062346562326430613466333161366261663335633261 Jan 20 13:58:21.068000 audit: BPF prog-id=247 op=UNLOAD Jan 20 13:58:21.068000 audit[5279]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5267 pid=5279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:21.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933356239643062346562326430613466333161366261663335633261 Jan 20 13:58:21.068000 audit: BPF prog-id=249 op=LOAD Jan 20 13:58:21.068000 audit[5279]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=5267 pid=5279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:21.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933356239643062346562326430613466333161366261663335633261 Jan 20 13:58:21.171000 audit[5313]: NETFILTER_CFG table=filter:131 family=2 entries=56 op=nft_register_chain pid=5313 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 13:58:21.171000 audit[5313]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28744 a0=3 a1=ffffc7dc0130 a2=0 a3=ffffb1cfdfa8 items=0 ppid=4905 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:21.171000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 13:58:21.178607 containerd[2143]: time="2026-01-20T13:58:21.178574924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc74856cf-7w8mb,Uid:5c39ce3f-4fc4-445a-adab-faa3675de94e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"935b9d0b4eb2d0a4f31a6baf35c2a66ba16d7dc0158483dd5381ebaf92a18290\"" Jan 20 13:58:21.181456 containerd[2143]: time="2026-01-20T13:58:21.180717637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 13:58:21.206027 containerd[2143]: time="2026-01-20T13:58:21.205946435Z" level=info msg="connecting to shim 32ef61025e78508f0c0060aa0a3306df7afd424a2eeb19938d348038a1e0e1e6" address="unix:///run/containerd/s/1c6a941988dd23a57369489c16b950db9d96ded59c150c59e9c61cca17c28cce" namespace=k8s.io protocol=ttrpc version=3 Jan 20 13:58:21.235574 systemd[1]: Started cri-containerd-32ef61025e78508f0c0060aa0a3306df7afd424a2eeb19938d348038a1e0e1e6.scope - libcontainer 
container 32ef61025e78508f0c0060aa0a3306df7afd424a2eeb19938d348038a1e0e1e6. Jan 20 13:58:21.242000 audit: BPF prog-id=250 op=LOAD Jan 20 13:58:21.242000 audit: BPF prog-id=251 op=LOAD Jan 20 13:58:21.242000 audit[5334]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=5322 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:21.242000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332656636313032356537383530386630633030363061613061333330 Jan 20 13:58:21.243000 audit: BPF prog-id=251 op=UNLOAD Jan 20 13:58:21.243000 audit[5334]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5322 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:21.243000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332656636313032356537383530386630633030363061613061333330 Jan 20 13:58:21.243000 audit: BPF prog-id=252 op=LOAD Jan 20 13:58:21.243000 audit[5334]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=5322 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:21.243000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332656636313032356537383530386630633030363061613061333330 Jan 20 13:58:21.243000 audit: BPF prog-id=253 op=LOAD Jan 20 13:58:21.243000 audit[5334]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=5322 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:21.243000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332656636313032356537383530386630633030363061613061333330 Jan 20 13:58:21.243000 audit: BPF prog-id=253 op=UNLOAD Jan 20 13:58:21.243000 audit[5334]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5322 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:21.243000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332656636313032356537383530386630633030363061613061333330 Jan 20 13:58:21.243000 
audit: BPF prog-id=252 op=UNLOAD Jan 20 13:58:21.243000 audit[5334]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5322 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:21.243000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332656636313032356537383530386630633030363061613061333330 Jan 20 13:58:21.243000 audit: BPF prog-id=254 op=LOAD Jan 20 13:58:21.243000 audit[5334]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=5322 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:21.243000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332656636313032356537383530386630633030363061613061333330 Jan 20 13:58:21.268120 containerd[2143]: time="2026-01-20T13:58:21.268084641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9ztjh,Uid:85bd9e66-9697-4ce6-b203-5bf288af5ea8,Namespace:calico-system,Attempt:0,} returns sandbox id \"32ef61025e78508f0c0060aa0a3306df7afd424a2eeb19938d348038a1e0e1e6\"" Jan 20 13:58:21.379555 systemd-networkd[1707]: caliac8d90bad38: Gained IPv6LL Jan 20 13:58:21.444045 containerd[2143]: time="2026-01-20T13:58:21.443992933Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 13:58:21.447327 containerd[2143]: time="2026-01-20T13:58:21.447287233Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 13:58:21.447463 containerd[2143]: time="2026-01-20T13:58:21.447375140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 13:58:21.449208 kubelet[3709]: E0120 13:58:21.447595 3709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 13:58:21.449208 kubelet[3709]: E0120 13:58:21.447646 3709 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 13:58:21.449208 kubelet[3709]: E0120 13:58:21.447852 3709 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c9rlh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7dc74856cf-7w8mb_calico-apiserver(5c39ce3f-4fc4-445a-adab-faa3675de94e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 13:58:21.449802 containerd[2143]: time="2026-01-20T13:58:21.449776853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 13:58:21.449974 kubelet[3709]: E0120 13:58:21.449906 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dc74856cf-7w8mb" podUID="5c39ce3f-4fc4-445a-adab-faa3675de94e" Jan 20 13:58:21.699651 systemd-networkd[1707]: calic1fbfe50d18: Gained IPv6LL Jan 20 13:58:21.720944 containerd[2143]: time="2026-01-20T13:58:21.720903858Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 13:58:21.724098 containerd[2143]: time="2026-01-20T13:58:21.723997232Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 13:58:21.724267 containerd[2143]: time="2026-01-20T13:58:21.724038146Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 13:58:21.724379 kubelet[3709]: E0120 13:58:21.724339 3709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 13:58:21.724459 kubelet[3709]: E0120 13:58:21.724404 3709 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 13:58:21.724556 kubelet[3709]: E0120 13:58:21.724522 3709 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ftvn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
goldmane-666569f655-9ztjh_calico-system(85bd9e66-9697-4ce6-b203-5bf288af5ea8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 13:58:21.725922 kubelet[3709]: E0120 13:58:21.725886 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9ztjh" podUID="85bd9e66-9697-4ce6-b203-5bf288af5ea8" Jan 20 13:58:21.993023 kubelet[3709]: E0120 13:58:21.992901 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9ztjh" podUID="85bd9e66-9697-4ce6-b203-5bf288af5ea8" Jan 20 13:58:21.995570 kubelet[3709]: E0120 13:58:21.995482 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vhqxx" podUID="23535579-9237-4da3-a34f-0ccbbd6a2ee0" Jan 20 13:58:21.995570 kubelet[3709]: E0120 13:58:21.995543 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dc74856cf-7w8mb" podUID="5c39ce3f-4fc4-445a-adab-faa3675de94e" Jan 20 13:58:21.996267 kubelet[3709]: E0120 13:58:21.995944 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68d86775db-xr92p" podUID="d927ba29-b3e9-4c16-90f8-d925c765f0ce" Jan 20 13:58:22.019746 systemd-networkd[1707]: 
cali138baf8a220: Gained IPv6LL Jan 20 13:58:22.027000 audit[5363]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=5363 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:58:22.027000 audit[5363]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffdeb57130 a2=0 a3=1 items=0 ppid=3859 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:22.027000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:58:22.032000 audit[5363]: NETFILTER_CFG table=nat:133 family=2 entries=14 op=nft_register_rule pid=5363 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:58:22.032000 audit[5363]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffdeb57130 a2=0 a3=1 items=0 ppid=3859 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:22.032000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:58:22.088000 audit[5365]: NETFILTER_CFG table=filter:134 family=2 entries=20 op=nft_register_rule pid=5365 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:58:22.088000 audit[5365]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffce77bc10 a2=0 a3=1 items=0 ppid=3859 pid=5365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:22.088000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:58:22.096000 audit[5365]: NETFILTER_CFG table=nat:135 family=2 entries=14 op=nft_register_rule pid=5365 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:58:22.096000 audit[5365]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffce77bc10 a2=0 a3=1 items=0 ppid=3859 pid=5365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:22.096000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:58:22.287420 kubelet[3709]: I0120 13:58:22.286878 3709 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 13:58:22.831691 containerd[2143]: time="2026-01-20T13:58:22.831629534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9pmz2,Uid:c6bc023f-2805-44ee-9145-9306204ad855,Namespace:kube-system,Attempt:0,}" Jan 20 13:58:22.832204 containerd[2143]: time="2026-01-20T13:58:22.832155118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc74856cf-zx59p,Uid:9e279828-5731-43f9-9e3d-019d423f62e9,Namespace:calico-apiserver,Attempt:0,}" Jan 20 13:58:22.995080 kubelet[3709]: E0120 13:58:22.994823 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dc74856cf-7w8mb" podUID="5c39ce3f-4fc4-445a-adab-faa3675de94e" Jan 20 13:58:22.995452 kubelet[3709]: E0120 13:58:22.995224 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9ztjh" podUID="85bd9e66-9697-4ce6-b203-5bf288af5ea8" Jan 20 13:58:23.107681 systemd-networkd[1707]: cali40d38a55f8c: Gained IPv6LL Jan 20 13:58:23.161496 systemd-networkd[1707]: calic1d11375376: Link UP Jan 20 13:58:23.162151 systemd-networkd[1707]: calic1d11375376: Gained carrier Jan 20 13:58:23.180925 containerd[2143]: 2026-01-20 13:58:23.099 [INFO][5421] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999.1.1--f--6b32856eb5-k8s-coredns--668d6bf9bc--9pmz2-eth0 coredns-668d6bf9bc- kube-system c6bc023f-2805-44ee-9145-9306204ad855 802 0 2026-01-20 13:57:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-9999.1.1-f-6b32856eb5 coredns-668d6bf9bc-9pmz2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic1d11375376 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6d49a71914db08915802875e6d9a3ae5435d2c4609aa05c231e535e2105c498a" Namespace="kube-system" Pod="coredns-668d6bf9bc-9pmz2" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-coredns--668d6bf9bc--9pmz2-" Jan 20 13:58:23.180925 containerd[2143]: 2026-01-20 13:58:23.100 [INFO][5421] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d49a71914db08915802875e6d9a3ae5435d2c4609aa05c231e535e2105c498a" Namespace="kube-system" Pod="coredns-668d6bf9bc-9pmz2" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-coredns--668d6bf9bc--9pmz2-eth0" Jan 20 13:58:23.180925 containerd[2143]: 2026-01-20 13:58:23.122 [INFO][5450] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d49a71914db08915802875e6d9a3ae5435d2c4609aa05c231e535e2105c498a" HandleID="k8s-pod-network.6d49a71914db08915802875e6d9a3ae5435d2c4609aa05c231e535e2105c498a" Workload="ci--9999.1.1--f--6b32856eb5-k8s-coredns--668d6bf9bc--9pmz2-eth0" Jan 20 13:58:23.181376 containerd[2143]: 2026-01-20 13:58:23.122 [INFO][5450] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6d49a71914db08915802875e6d9a3ae5435d2c4609aa05c231e535e2105c498a" HandleID="k8s-pod-network.6d49a71914db08915802875e6d9a3ae5435d2c4609aa05c231e535e2105c498a" Workload="ci--9999.1.1--f--6b32856eb5-k8s-coredns--668d6bf9bc--9pmz2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b090), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-9999.1.1-f-6b32856eb5", "pod":"coredns-668d6bf9bc-9pmz2", "timestamp":"2026-01-20 13:58:23.122550804 +0000 UTC"}, 
Hostname:"ci-9999.1.1-f-6b32856eb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 13:58:23.181376 containerd[2143]: 2026-01-20 13:58:23.122 [INFO][5450] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 13:58:23.181376 containerd[2143]: 2026-01-20 13:58:23.123 [INFO][5450] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 13:58:23.181376 containerd[2143]: 2026-01-20 13:58:23.123 [INFO][5450] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999.1.1-f-6b32856eb5' Jan 20 13:58:23.181376 containerd[2143]: 2026-01-20 13:58:23.128 [INFO][5450] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6d49a71914db08915802875e6d9a3ae5435d2c4609aa05c231e535e2105c498a" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:23.181376 containerd[2143]: 2026-01-20 13:58:23.132 [INFO][5450] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:23.181376 containerd[2143]: 2026-01-20 13:58:23.135 [INFO][5450] ipam/ipam.go 511: Trying affinity for 192.168.36.64/26 host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:23.181376 containerd[2143]: 2026-01-20 13:58:23.137 [INFO][5450] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.64/26 host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:23.181376 containerd[2143]: 2026-01-20 13:58:23.138 [INFO][5450] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:23.181870 containerd[2143]: 2026-01-20 13:58:23.138 [INFO][5450] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.6d49a71914db08915802875e6d9a3ae5435d2c4609aa05c231e535e2105c498a" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:23.181870 containerd[2143]: 2026-01-20 13:58:23.139 [INFO][5450] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6d49a71914db08915802875e6d9a3ae5435d2c4609aa05c231e535e2105c498a Jan 20 13:58:23.181870 containerd[2143]: 2026-01-20 13:58:23.145 [INFO][5450] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.6d49a71914db08915802875e6d9a3ae5435d2c4609aa05c231e535e2105c498a" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:23.181870 containerd[2143]: 2026-01-20 13:58:23.153 [INFO][5450] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.36.70/26] block=192.168.36.64/26 handle="k8s-pod-network.6d49a71914db08915802875e6d9a3ae5435d2c4609aa05c231e535e2105c498a" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:23.181870 containerd[2143]: 2026-01-20 13:58:23.153 [INFO][5450] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.70/26] handle="k8s-pod-network.6d49a71914db08915802875e6d9a3ae5435d2c4609aa05c231e535e2105c498a" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:23.181870 containerd[2143]: 2026-01-20 13:58:23.153 [INFO][5450] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 13:58:23.181870 containerd[2143]: 2026-01-20 13:58:23.153 [INFO][5450] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.36.70/26] IPv6=[] ContainerID="6d49a71914db08915802875e6d9a3ae5435d2c4609aa05c231e535e2105c498a" HandleID="k8s-pod-network.6d49a71914db08915802875e6d9a3ae5435d2c4609aa05c231e535e2105c498a" Workload="ci--9999.1.1--f--6b32856eb5-k8s-coredns--668d6bf9bc--9pmz2-eth0" Jan 20 13:58:23.182203 containerd[2143]: 2026-01-20 13:58:23.157 [INFO][5421] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d49a71914db08915802875e6d9a3ae5435d2c4609aa05c231e535e2105c498a" Namespace="kube-system" Pod="coredns-668d6bf9bc-9pmz2" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-coredns--668d6bf9bc--9pmz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.1.1--f--6b32856eb5-k8s-coredns--668d6bf9bc--9pmz2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c6bc023f-2805-44ee-9145-9306204ad855", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 13, 57, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.1.1-f-6b32856eb5", ContainerID:"", Pod:"coredns-668d6bf9bc-9pmz2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic1d11375376", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 13:58:23.182203 containerd[2143]: 2026-01-20 13:58:23.157 [INFO][5421] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.70/32] ContainerID="6d49a71914db08915802875e6d9a3ae5435d2c4609aa05c231e535e2105c498a" Namespace="kube-system" Pod="coredns-668d6bf9bc-9pmz2" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-coredns--668d6bf9bc--9pmz2-eth0" Jan 20 13:58:23.182203 containerd[2143]: 2026-01-20 13:58:23.157 [INFO][5421] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1d11375376 ContainerID="6d49a71914db08915802875e6d9a3ae5435d2c4609aa05c231e535e2105c498a" Namespace="kube-system" Pod="coredns-668d6bf9bc-9pmz2" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-coredns--668d6bf9bc--9pmz2-eth0" Jan 20 13:58:23.182203 containerd[2143]: 2026-01-20 13:58:23.160 [INFO][5421] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d49a71914db08915802875e6d9a3ae5435d2c4609aa05c231e535e2105c498a" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-9pmz2" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-coredns--668d6bf9bc--9pmz2-eth0" Jan 20 13:58:23.182203 containerd[2143]: 2026-01-20 13:58:23.160 [INFO][5421] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6d49a71914db08915802875e6d9a3ae5435d2c4609aa05c231e535e2105c498a" Namespace="kube-system" Pod="coredns-668d6bf9bc-9pmz2" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-coredns--668d6bf9bc--9pmz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.1.1--f--6b32856eb5-k8s-coredns--668d6bf9bc--9pmz2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c6bc023f-2805-44ee-9145-9306204ad855", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 13, 57, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.1.1-f-6b32856eb5", ContainerID:"6d49a71914db08915802875e6d9a3ae5435d2c4609aa05c231e535e2105c498a", Pod:"coredns-668d6bf9bc-9pmz2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic1d11375376", MAC:"36:73:27:a7:ad:97", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 13:58:23.182203 containerd[2143]: 2026-01-20 13:58:23.177 [INFO][5421] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6d49a71914db08915802875e6d9a3ae5435d2c4609aa05c231e535e2105c498a" Namespace="kube-system" Pod="coredns-668d6bf9bc-9pmz2" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-coredns--668d6bf9bc--9pmz2-eth0" Jan 20 13:58:23.192000 audit[5468]: NETFILTER_CFG table=filter:136 family=2 entries=64 op=nft_register_chain pid=5468 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 13:58:23.192000 audit[5468]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=30156 a0=3 a1=ffffd5837350 a2=0 a3=ffffb4736fa8 items=0 ppid=4905 pid=5468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:23.192000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 13:58:23.221111 containerd[2143]: time="2026-01-20T13:58:23.221069747Z" level=info 
msg="connecting to shim 6d49a71914db08915802875e6d9a3ae5435d2c4609aa05c231e535e2105c498a" address="unix:///run/containerd/s/e4720c843018687fd6377194d4df417289189b70339c3b889fdd694c929fe3de" namespace=k8s.io protocol=ttrpc version=3 Jan 20 13:58:23.247626 systemd[1]: Started cri-containerd-6d49a71914db08915802875e6d9a3ae5435d2c4609aa05c231e535e2105c498a.scope - libcontainer container 6d49a71914db08915802875e6d9a3ae5435d2c4609aa05c231e535e2105c498a. Jan 20 13:58:23.257000 audit: BPF prog-id=255 op=LOAD Jan 20 13:58:23.258000 audit: BPF prog-id=256 op=LOAD Jan 20 13:58:23.258000 audit[5489]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5478 pid=5489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:23.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664343961373139313464623038393135383032383735653664396133 Jan 20 13:58:23.259000 audit: BPF prog-id=256 op=UNLOAD Jan 20 13:58:23.259000 audit[5489]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5478 pid=5489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:23.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664343961373139313464623038393135383032383735653664396133 Jan 20 13:58:23.259000 audit: BPF prog-id=257 op=LOAD Jan 20 13:58:23.259000 audit[5489]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5478 pid=5489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:23.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664343961373139313464623038393135383032383735653664396133 Jan 20 13:58:23.260000 audit: BPF prog-id=258 op=LOAD Jan 20 13:58:23.260000 audit[5489]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5478 pid=5489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:23.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664343961373139313464623038393135383032383735653664396133 Jan 20 13:58:23.260000 audit: BPF prog-id=258 op=UNLOAD Jan 20 13:58:23.260000 audit[5489]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5478 pid=5489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:23.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664343961373139313464623038393135383032383735653664396133 Jan 20 13:58:23.260000 audit: BPF prog-id=257 op=UNLOAD Jan 20 13:58:23.260000 audit[5489]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5478 pid=5489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:23.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664343961373139313464623038393135383032383735653664396133 Jan 20 13:58:23.260000 audit: BPF prog-id=259 op=LOAD Jan 20 13:58:23.260000 audit[5489]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5478 pid=5489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:23.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664343961373139313464623038393135383032383735653664396133 Jan 20 13:58:23.269793 systemd-networkd[1707]: cali7216b1a22ce: Link UP Jan 20 13:58:23.270124 systemd-networkd[1707]: cali7216b1a22ce: Gained carrier Jan 20 13:58:23.290414 containerd[2143]: 2026-01-20 13:58:23.097 [INFO][5432] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999.1.1--f--6b32856eb5-k8s-calico--apiserver--7dc74856cf--zx59p-eth0 calico-apiserver-7dc74856cf- calico-apiserver 9e279828-5731-43f9-9e3d-019d423f62e9 801 0 2026-01-20 13:57:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7dc74856cf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-9999.1.1-f-6b32856eb5 calico-apiserver-7dc74856cf-zx59p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7216b1a22ce [] [] }} ContainerID="cf74822bf69d30e024286ae2208d92d11858343577b62c3401bae28b598a973d" Namespace="calico-apiserver" Pod="calico-apiserver-7dc74856cf-zx59p" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-calico--apiserver--7dc74856cf--zx59p-" Jan 20 13:58:23.290414 containerd[2143]: 2026-01-20 13:58:23.097 [INFO][5432] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cf74822bf69d30e024286ae2208d92d11858343577b62c3401bae28b598a973d" Namespace="calico-apiserver" Pod="calico-apiserver-7dc74856cf-zx59p" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-calico--apiserver--7dc74856cf--zx59p-eth0" Jan 20 13:58:23.290414 containerd[2143]: 2026-01-20 13:58:23.123 [INFO][5445] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf74822bf69d30e024286ae2208d92d11858343577b62c3401bae28b598a973d" 
HandleID="k8s-pod-network.cf74822bf69d30e024286ae2208d92d11858343577b62c3401bae28b598a973d" Workload="ci--9999.1.1--f--6b32856eb5-k8s-calico--apiserver--7dc74856cf--zx59p-eth0" Jan 20 13:58:23.290414 containerd[2143]: 2026-01-20 13:58:23.126 [INFO][5445] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cf74822bf69d30e024286ae2208d92d11858343577b62c3401bae28b598a973d" HandleID="k8s-pod-network.cf74822bf69d30e024286ae2208d92d11858343577b62c3401bae28b598a973d" Workload="ci--9999.1.1--f--6b32856eb5-k8s-calico--apiserver--7dc74856cf--zx59p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3cd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-9999.1.1-f-6b32856eb5", "pod":"calico-apiserver-7dc74856cf-zx59p", "timestamp":"2026-01-20 13:58:23.123931982 +0000 UTC"}, Hostname:"ci-9999.1.1-f-6b32856eb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 13:58:23.290414 containerd[2143]: 2026-01-20 13:58:23.126 [INFO][5445] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 13:58:23.290414 containerd[2143]: 2026-01-20 13:58:23.153 [INFO][5445] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 13:58:23.290414 containerd[2143]: 2026-01-20 13:58:23.153 [INFO][5445] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999.1.1-f-6b32856eb5' Jan 20 13:58:23.290414 containerd[2143]: 2026-01-20 13:58:23.230 [INFO][5445] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cf74822bf69d30e024286ae2208d92d11858343577b62c3401bae28b598a973d" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:23.290414 containerd[2143]: 2026-01-20 13:58:23.234 [INFO][5445] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:23.290414 containerd[2143]: 2026-01-20 13:58:23.240 [INFO][5445] ipam/ipam.go 511: Trying affinity for 192.168.36.64/26 host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:23.290414 containerd[2143]: 2026-01-20 13:58:23.243 [INFO][5445] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.64/26 host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:23.290414 containerd[2143]: 2026-01-20 13:58:23.245 [INFO][5445] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:23.290414 containerd[2143]: 2026-01-20 13:58:23.245 [INFO][5445] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.cf74822bf69d30e024286ae2208d92d11858343577b62c3401bae28b598a973d" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:23.290414 containerd[2143]: 2026-01-20 13:58:23.247 [INFO][5445] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cf74822bf69d30e024286ae2208d92d11858343577b62c3401bae28b598a973d Jan 20 13:58:23.290414 containerd[2143]: 2026-01-20 13:58:23.252 [INFO][5445] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.cf74822bf69d30e024286ae2208d92d11858343577b62c3401bae28b598a973d" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:23.290414 containerd[2143]: 2026-01-20 13:58:23.264 [INFO][5445] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.36.71/26] block=192.168.36.64/26 handle="k8s-pod-network.cf74822bf69d30e024286ae2208d92d11858343577b62c3401bae28b598a973d" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:23.290414 
containerd[2143]: 2026-01-20 13:58:23.264 [INFO][5445] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.71/26] handle="k8s-pod-network.cf74822bf69d30e024286ae2208d92d11858343577b62c3401bae28b598a973d" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:23.290414 containerd[2143]: 2026-01-20 13:58:23.264 [INFO][5445] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 13:58:23.290414 containerd[2143]: 2026-01-20 13:58:23.264 [INFO][5445] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.36.71/26] IPv6=[] ContainerID="cf74822bf69d30e024286ae2208d92d11858343577b62c3401bae28b598a973d" HandleID="k8s-pod-network.cf74822bf69d30e024286ae2208d92d11858343577b62c3401bae28b598a973d" Workload="ci--9999.1.1--f--6b32856eb5-k8s-calico--apiserver--7dc74856cf--zx59p-eth0" Jan 20 13:58:23.290812 containerd[2143]: 2026-01-20 13:58:23.266 [INFO][5432] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cf74822bf69d30e024286ae2208d92d11858343577b62c3401bae28b598a973d" Namespace="calico-apiserver" Pod="calico-apiserver-7dc74856cf-zx59p" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-calico--apiserver--7dc74856cf--zx59p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.1.1--f--6b32856eb5-k8s-calico--apiserver--7dc74856cf--zx59p-eth0", GenerateName:"calico-apiserver-7dc74856cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"9e279828-5731-43f9-9e3d-019d423f62e9", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 13, 57, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dc74856cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.1.1-f-6b32856eb5", ContainerID:"", Pod:"calico-apiserver-7dc74856cf-zx59p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7216b1a22ce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 13:58:23.290812 containerd[2143]: 2026-01-20 13:58:23.267 [INFO][5432] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.71/32] ContainerID="cf74822bf69d30e024286ae2208d92d11858343577b62c3401bae28b598a973d" Namespace="calico-apiserver" Pod="calico-apiserver-7dc74856cf-zx59p" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-calico--apiserver--7dc74856cf--zx59p-eth0" Jan 20 13:58:23.290812 containerd[2143]: 2026-01-20 13:58:23.267 [INFO][5432] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7216b1a22ce ContainerID="cf74822bf69d30e024286ae2208d92d11858343577b62c3401bae28b598a973d" Namespace="calico-apiserver" Pod="calico-apiserver-7dc74856cf-zx59p" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-calico--apiserver--7dc74856cf--zx59p-eth0" Jan 20 13:58:23.290812 containerd[2143]: 2026-01-20 13:58:23.269 
[INFO][5432] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf74822bf69d30e024286ae2208d92d11858343577b62c3401bae28b598a973d" Namespace="calico-apiserver" Pod="calico-apiserver-7dc74856cf-zx59p" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-calico--apiserver--7dc74856cf--zx59p-eth0" Jan 20 13:58:23.290812 containerd[2143]: 2026-01-20 13:58:23.270 [INFO][5432] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cf74822bf69d30e024286ae2208d92d11858343577b62c3401bae28b598a973d" Namespace="calico-apiserver" Pod="calico-apiserver-7dc74856cf-zx59p" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-calico--apiserver--7dc74856cf--zx59p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.1.1--f--6b32856eb5-k8s-calico--apiserver--7dc74856cf--zx59p-eth0", GenerateName:"calico-apiserver-7dc74856cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"9e279828-5731-43f9-9e3d-019d423f62e9", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 13, 57, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dc74856cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.1.1-f-6b32856eb5", ContainerID:"cf74822bf69d30e024286ae2208d92d11858343577b62c3401bae28b598a973d", Pod:"calico-apiserver-7dc74856cf-zx59p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7216b1a22ce", MAC:"12:43:86:40:b1:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 13:58:23.290812 containerd[2143]: 2026-01-20 13:58:23.285 [INFO][5432] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cf74822bf69d30e024286ae2208d92d11858343577b62c3401bae28b598a973d" Namespace="calico-apiserver" Pod="calico-apiserver-7dc74856cf-zx59p" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-calico--apiserver--7dc74856cf--zx59p-eth0" Jan 20 13:58:23.303156 containerd[2143]: time="2026-01-20T13:58:23.303121405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9pmz2,Uid:c6bc023f-2805-44ee-9145-9306204ad855,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d49a71914db08915802875e6d9a3ae5435d2c4609aa05c231e535e2105c498a\"" Jan 20 13:58:23.306423 containerd[2143]: time="2026-01-20T13:58:23.306207835Z" level=info msg="CreateContainer within sandbox \"6d49a71914db08915802875e6d9a3ae5435d2c4609aa05c231e535e2105c498a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 13:58:23.310000 audit[5521]: NETFILTER_CFG table=filter:137 family=2 entries=53 op=nft_register_chain pid=5521 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 13:58:23.310000 audit[5521]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26624 
a0=3 a1=fffff2c7e540 a2=0 a3=ffffbce06fa8 items=0 ppid=4905 pid=5521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:23.310000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 13:58:23.338548 containerd[2143]: time="2026-01-20T13:58:23.338511632Z" level=info msg="connecting to shim cf74822bf69d30e024286ae2208d92d11858343577b62c3401bae28b598a973d" address="unix:///run/containerd/s/378b4117cf58ac470e7bf8535fdc1c77ddc7d08c47f3da2bc8bdaa5dac53f0bd" namespace=k8s.io protocol=ttrpc version=3 Jan 20 13:58:23.341707 containerd[2143]: time="2026-01-20T13:58:23.341648751Z" level=info msg="Container 2f81fba8a3126498d5f200d2052eb3bb99856ceddd5eb3600d4bb5a0b76e503c: CDI devices from CRI Config.CDIDevices: []" Jan 20 13:58:23.356787 containerd[2143]: time="2026-01-20T13:58:23.356723336Z" level=info msg="CreateContainer within sandbox \"6d49a71914db08915802875e6d9a3ae5435d2c4609aa05c231e535e2105c498a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2f81fba8a3126498d5f200d2052eb3bb99856ceddd5eb3600d4bb5a0b76e503c\"" Jan 20 13:58:23.358093 containerd[2143]: time="2026-01-20T13:58:23.358011040Z" level=info msg="StartContainer for \"2f81fba8a3126498d5f200d2052eb3bb99856ceddd5eb3600d4bb5a0b76e503c\"" Jan 20 13:58:23.359989 containerd[2143]: time="2026-01-20T13:58:23.358838617Z" level=info msg="connecting to shim 2f81fba8a3126498d5f200d2052eb3bb99856ceddd5eb3600d4bb5a0b76e503c" address="unix:///run/containerd/s/e4720c843018687fd6377194d4df417289189b70339c3b889fdd694c929fe3de" protocol=ttrpc version=3 Jan 20 13:58:23.359192 systemd[1]: Started cri-containerd-cf74822bf69d30e024286ae2208d92d11858343577b62c3401bae28b598a973d.scope - libcontainer container cf74822bf69d30e024286ae2208d92d11858343577b62c3401bae28b598a973d. 
Jan 20 13:58:23.367000 audit: BPF prog-id=260 op=LOAD Jan 20 13:58:23.368000 audit: BPF prog-id=261 op=LOAD Jan 20 13:58:23.368000 audit[5544]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5532 pid=5544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:23.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366373438323262663639643330653032343238366165323230386439 Jan 20 13:58:23.368000 audit: BPF prog-id=261 op=UNLOAD Jan 20 13:58:23.368000 audit[5544]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5532 pid=5544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:23.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366373438323262663639643330653032343238366165323230386439 Jan 20 13:58:23.369000 audit: BPF prog-id=262 op=LOAD Jan 20 13:58:23.369000 audit[5544]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5532 pid=5544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:23.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366373438323262663639643330653032343238366165323230386439 Jan 20 13:58:23.369000 audit: BPF prog-id=263 op=LOAD Jan 20 13:58:23.369000 audit[5544]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5532 pid=5544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:23.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366373438323262663639643330653032343238366165323230386439 Jan 20 13:58:23.369000 audit: BPF prog-id=263 op=UNLOAD Jan 20 13:58:23.369000 audit[5544]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5532 pid=5544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:23.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366373438323262663639643330653032343238366165323230386439 Jan 20 13:58:23.369000 audit: BPF prog-id=262 op=UNLOAD Jan 20 13:58:23.369000 audit[5544]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5532 pid=5544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:23.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366373438323262663639643330653032343238366165323230386439 Jan 20 13:58:23.369000 audit: BPF prog-id=264 op=LOAD Jan 20 13:58:23.369000 audit[5544]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5532 pid=5544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:23.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366373438323262663639643330653032343238366165323230386439 Jan 20 13:58:23.382785 systemd[1]: Started cri-containerd-2f81fba8a3126498d5f200d2052eb3bb99856ceddd5eb3600d4bb5a0b76e503c.scope - libcontainer container 2f81fba8a3126498d5f200d2052eb3bb99856ceddd5eb3600d4bb5a0b76e503c. Jan 20 13:58:23.395000 audit: BPF prog-id=265 op=LOAD Jan 20 13:58:23.395000 audit: BPF prog-id=266 op=LOAD Jan 20 13:58:23.395000 audit[5558]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186180 a2=98 a3=0 items=0 ppid=5478 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:23.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266383166626138613331323634393864356632303064323035326562 Jan 20 13:58:23.395000 audit: BPF prog-id=266 op=UNLOAD Jan 20 13:58:23.395000 audit[5558]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5478 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:23.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266383166626138613331323634393864356632303064323035326562 Jan 20 13:58:23.395000 audit: BPF prog-id=267 op=LOAD Jan 20 13:58:23.395000 audit[5558]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=5478 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:23.395000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266383166626138613331323634393864356632303064323035326562 Jan 20 13:58:23.395000 audit: BPF prog-id=268 op=LOAD Jan 20 13:58:23.395000 audit[5558]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000186168 a2=98 a3=0 items=0 ppid=5478 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:23.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266383166626138613331323634393864356632303064323035326562 Jan 20 13:58:23.395000 audit: BPF prog-id=268 op=UNLOAD Jan 20 13:58:23.395000 audit[5558]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5478 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:23.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266383166626138613331323634393864356632303064323035326562 Jan 20 13:58:23.395000 audit: BPF prog-id=267 op=UNLOAD Jan 20 13:58:23.395000 audit[5558]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5478 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:23.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266383166626138613331323634393864356632303064323035326562 Jan 20 13:58:23.395000 audit: BPF prog-id=269 op=LOAD Jan 20 13:58:23.395000 audit[5558]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186648 a2=98 a3=0 items=0 ppid=5478 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:23.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266383166626138613331323634393864356632303064323035326562 Jan 20 13:58:23.399742 containerd[2143]: time="2026-01-20T13:58:23.399624367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc74856cf-zx59p,Uid:9e279828-5731-43f9-9e3d-019d423f62e9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"cf74822bf69d30e024286ae2208d92d11858343577b62c3401bae28b598a973d\"" Jan 20 13:58:23.403083 containerd[2143]: time="2026-01-20T13:58:23.402970004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 13:58:23.419172 containerd[2143]: 
time="2026-01-20T13:58:23.419117494Z" level=info msg="StartContainer for \"2f81fba8a3126498d5f200d2052eb3bb99856ceddd5eb3600d4bb5a0b76e503c\" returns successfully" Jan 20 13:58:23.671176 containerd[2143]: time="2026-01-20T13:58:23.670492829Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 13:58:23.673873 containerd[2143]: time="2026-01-20T13:58:23.673764336Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 13:58:23.673873 containerd[2143]: time="2026-01-20T13:58:23.673792193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 13:58:23.674167 kubelet[3709]: E0120 13:58:23.674121 3709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 13:58:23.674202 kubelet[3709]: E0120 13:58:23.674169 3709 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 13:58:23.674403 kubelet[3709]: E0120 13:58:23.674276 3709 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7rhtb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7dc74856cf-zx59p_calico-apiserver(9e279828-5731-43f9-9e3d-019d423f62e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 13:58:23.675469 kubelet[3709]: E0120 13:58:23.675435 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dc74856cf-zx59p" podUID="9e279828-5731-43f9-9e3d-019d423f62e9" Jan 20 13:58:23.832089 containerd[2143]: time="2026-01-20T13:58:23.832038596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4dfzv,Uid:1660a4b3-622a-43b1-b762-2fd404939ef1,Namespace:kube-system,Attempt:0,}" Jan 20 13:58:23.941036 systemd-networkd[1707]: calif0fa6e150f0: Link UP Jan 20 13:58:23.942723 systemd-networkd[1707]: calif0fa6e150f0: Gained carrier Jan 20 13:58:23.965176 containerd[2143]: 2026-01-20 13:58:23.878 [INFO][5602] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999.1.1--f--6b32856eb5-k8s-coredns--668d6bf9bc--4dfzv-eth0 coredns-668d6bf9bc- kube-system 1660a4b3-622a-43b1-b762-2fd404939ef1 796 0 2026-01-20 13:57:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-9999.1.1-f-6b32856eb5 coredns-668d6bf9bc-4dfzv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif0fa6e150f0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="954357eb56e8e797f3cc86d0d0c34005b3f84fee598223f88a0b744a90d23138" Namespace="kube-system" Pod="coredns-668d6bf9bc-4dfzv" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-coredns--668d6bf9bc--4dfzv-" Jan 20 13:58:23.965176 containerd[2143]: 2026-01-20 13:58:23.879 [INFO][5602] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="954357eb56e8e797f3cc86d0d0c34005b3f84fee598223f88a0b744a90d23138" Namespace="kube-system" Pod="coredns-668d6bf9bc-4dfzv" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-coredns--668d6bf9bc--4dfzv-eth0" Jan 20 13:58:23.965176 containerd[2143]: 2026-01-20 13:58:23.901 [INFO][5615] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="954357eb56e8e797f3cc86d0d0c34005b3f84fee598223f88a0b744a90d23138" HandleID="k8s-pod-network.954357eb56e8e797f3cc86d0d0c34005b3f84fee598223f88a0b744a90d23138" Workload="ci--9999.1.1--f--6b32856eb5-k8s-coredns--668d6bf9bc--4dfzv-eth0" Jan 20 13:58:23.965176 containerd[2143]: 2026-01-20 13:58:23.901 [INFO][5615] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="954357eb56e8e797f3cc86d0d0c34005b3f84fee598223f88a0b744a90d23138" HandleID="k8s-pod-network.954357eb56e8e797f3cc86d0d0c34005b3f84fee598223f88a0b744a90d23138" Workload="ci--9999.1.1--f--6b32856eb5-k8s-coredns--668d6bf9bc--4dfzv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-9999.1.1-f-6b32856eb5", "pod":"coredns-668d6bf9bc-4dfzv", "timestamp":"2026-01-20 13:58:23.901060443 +0000 UTC"}, Hostname:"ci-9999.1.1-f-6b32856eb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 13:58:23.965176 containerd[2143]: 2026-01-20 13:58:23.901 [INFO][5615] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 13:58:23.965176 containerd[2143]: 2026-01-20 13:58:23.901 [INFO][5615] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 13:58:23.965176 containerd[2143]: 2026-01-20 13:58:23.901 [INFO][5615] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999.1.1-f-6b32856eb5' Jan 20 13:58:23.965176 containerd[2143]: 2026-01-20 13:58:23.909 [INFO][5615] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.954357eb56e8e797f3cc86d0d0c34005b3f84fee598223f88a0b744a90d23138" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:23.965176 containerd[2143]: 2026-01-20 13:58:23.915 [INFO][5615] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:23.965176 containerd[2143]: 2026-01-20 13:58:23.918 [INFO][5615] ipam/ipam.go 511: Trying affinity for 192.168.36.64/26 host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:23.965176 containerd[2143]: 2026-01-20 13:58:23.919 [INFO][5615] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.64/26 host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:23.965176 containerd[2143]: 2026-01-20 13:58:23.921 [INFO][5615] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:23.965176 containerd[2143]: 2026-01-20 13:58:23.921 [INFO][5615] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.954357eb56e8e797f3cc86d0d0c34005b3f84fee598223f88a0b744a90d23138" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:23.965176 containerd[2143]: 2026-01-20 13:58:23.922 [INFO][5615] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.954357eb56e8e797f3cc86d0d0c34005b3f84fee598223f88a0b744a90d23138 Jan 20 13:58:23.965176 containerd[2143]: 2026-01-20 13:58:23.926 [INFO][5615] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.954357eb56e8e797f3cc86d0d0c34005b3f84fee598223f88a0b744a90d23138" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:23.965176 containerd[2143]: 2026-01-20 13:58:23.936 [INFO][5615] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.36.72/26] block=192.168.36.64/26 handle="k8s-pod-network.954357eb56e8e797f3cc86d0d0c34005b3f84fee598223f88a0b744a90d23138" 
host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:23.965176 containerd[2143]: 2026-01-20 13:58:23.936 [INFO][5615] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.72/26] handle="k8s-pod-network.954357eb56e8e797f3cc86d0d0c34005b3f84fee598223f88a0b744a90d23138" host="ci-9999.1.1-f-6b32856eb5" Jan 20 13:58:23.965176 containerd[2143]: 2026-01-20 13:58:23.937 [INFO][5615] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 13:58:23.965176 containerd[2143]: 2026-01-20 13:58:23.937 [INFO][5615] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.36.72/26] IPv6=[] ContainerID="954357eb56e8e797f3cc86d0d0c34005b3f84fee598223f88a0b744a90d23138" HandleID="k8s-pod-network.954357eb56e8e797f3cc86d0d0c34005b3f84fee598223f88a0b744a90d23138" Workload="ci--9999.1.1--f--6b32856eb5-k8s-coredns--668d6bf9bc--4dfzv-eth0" Jan 20 13:58:23.966885 containerd[2143]: 2026-01-20 13:58:23.939 [INFO][5602] cni-plugin/k8s.go 418: Populated endpoint ContainerID="954357eb56e8e797f3cc86d0d0c34005b3f84fee598223f88a0b744a90d23138" Namespace="kube-system" Pod="coredns-668d6bf9bc-4dfzv" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-coredns--668d6bf9bc--4dfzv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.1.1--f--6b32856eb5-k8s-coredns--668d6bf9bc--4dfzv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1660a4b3-622a-43b1-b762-2fd404939ef1", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 13, 57, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.1.1-f-6b32856eb5", ContainerID:"", Pod:"coredns-668d6bf9bc-4dfzv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0fa6e150f0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 13:58:23.966885 containerd[2143]: 2026-01-20 13:58:23.939 [INFO][5602] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.72/32] ContainerID="954357eb56e8e797f3cc86d0d0c34005b3f84fee598223f88a0b744a90d23138" Namespace="kube-system" Pod="coredns-668d6bf9bc-4dfzv" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-coredns--668d6bf9bc--4dfzv-eth0" Jan 20 13:58:23.966885 containerd[2143]: 2026-01-20 13:58:23.939 [INFO][5602] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0fa6e150f0 
ContainerID="954357eb56e8e797f3cc86d0d0c34005b3f84fee598223f88a0b744a90d23138" Namespace="kube-system" Pod="coredns-668d6bf9bc-4dfzv" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-coredns--668d6bf9bc--4dfzv-eth0" Jan 20 13:58:23.966885 containerd[2143]: 2026-01-20 13:58:23.942 [INFO][5602] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="954357eb56e8e797f3cc86d0d0c34005b3f84fee598223f88a0b744a90d23138" Namespace="kube-system" Pod="coredns-668d6bf9bc-4dfzv" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-coredns--668d6bf9bc--4dfzv-eth0" Jan 20 13:58:23.966885 containerd[2143]: 2026-01-20 13:58:23.943 [INFO][5602] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="954357eb56e8e797f3cc86d0d0c34005b3f84fee598223f88a0b744a90d23138" Namespace="kube-system" Pod="coredns-668d6bf9bc-4dfzv" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-coredns--668d6bf9bc--4dfzv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.1.1--f--6b32856eb5-k8s-coredns--668d6bf9bc--4dfzv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1660a4b3-622a-43b1-b762-2fd404939ef1", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 13, 57, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.1.1-f-6b32856eb5", ContainerID:"954357eb56e8e797f3cc86d0d0c34005b3f84fee598223f88a0b744a90d23138", Pod:"coredns-668d6bf9bc-4dfzv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0fa6e150f0", MAC:"66:85:96:70:8f:e5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 13:58:23.966885 containerd[2143]: 2026-01-20 13:58:23.961 [INFO][5602] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="954357eb56e8e797f3cc86d0d0c34005b3f84fee598223f88a0b744a90d23138" Namespace="kube-system" Pod="coredns-668d6bf9bc-4dfzv" WorkloadEndpoint="ci--9999.1.1--f--6b32856eb5-k8s-coredns--668d6bf9bc--4dfzv-eth0" Jan 20 13:58:23.981000 audit[5629]: NETFILTER_CFG table=filter:138 family=2 entries=52 op=nft_register_chain pid=5629 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 13:58:23.981000 audit[5629]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23892 a0=3 a1=ffffcf467fe0 a2=0 a3=ffff94010fa8 items=0 ppid=4905 pid=5629 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:23.981000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 13:58:24.007053 kubelet[3709]: E0120 13:58:24.007015 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dc74856cf-zx59p" podUID="9e279828-5731-43f9-9e3d-019d423f62e9" Jan 20 13:58:24.021913 kubelet[3709]: I0120 13:58:24.021696 3709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9pmz2" podStartSLOduration=45.021680593 podStartE2EDuration="45.021680593s" podCreationTimestamp="2026-01-20 13:57:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 13:58:24.021264252 +0000 UTC m=+50.254705443" watchObservedRunningTime="2026-01-20 13:58:24.021680593 +0000 UTC m=+50.255121792" Jan 20 13:58:24.031040 containerd[2143]: time="2026-01-20T13:58:24.031005628Z" level=info msg="connecting to shim 954357eb56e8e797f3cc86d0d0c34005b3f84fee598223f88a0b744a90d23138" address="unix:///run/containerd/s/d26c9c9c3c7017a9389c25a295e542d8a4999994e1304bbfeb7f1ec64d7ba2d4" namespace=k8s.io protocol=ttrpc version=3 Jan 20 13:58:24.061551 systemd[1]: Started cri-containerd-954357eb56e8e797f3cc86d0d0c34005b3f84fee598223f88a0b744a90d23138.scope - libcontainer container 954357eb56e8e797f3cc86d0d0c34005b3f84fee598223f88a0b744a90d23138. 
Jan 20 13:58:24.070000 audit[5663]: NETFILTER_CFG table=filter:139 family=2 entries=20 op=nft_register_rule pid=5663 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:58:24.070000 audit[5663]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe5751c90 a2=0 a3=1 items=0 ppid=3859 pid=5663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:24.070000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:58:24.075000 audit[5663]: NETFILTER_CFG table=nat:140 family=2 entries=14 op=nft_register_rule pid=5663 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:58:24.075000 audit[5663]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffe5751c90 a2=0 a3=1 items=0 ppid=3859 pid=5663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:24.075000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:58:24.086000 audit: BPF prog-id=270 op=LOAD Jan 20 13:58:24.087000 audit: BPF prog-id=271 op=LOAD Jan 20 13:58:24.087000 audit[5651]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c180 a2=98 a3=0 items=0 ppid=5640 pid=5651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:24.087000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935343335376562353665386537393766336363383664306430633334 Jan 20 13:58:24.087000 audit: BPF prog-id=271 op=UNLOAD Jan 20 13:58:24.087000 audit[5651]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5640 pid=5651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:24.087000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935343335376562353665386537393766336363383664306430633334 Jan 20 13:58:24.087000 audit: BPF prog-id=272 op=LOAD Jan 20 13:58:24.087000 audit[5651]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=5640 pid=5651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:24.087000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935343335376562353665386537393766336363383664306430633334 Jan 20 13:58:24.087000 audit: BPF prog-id=273 op=LOAD Jan 20 13:58:24.087000 audit[5651]: SYSCALL 
arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400010c168 a2=98 a3=0 items=0 ppid=5640 pid=5651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:24.087000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935343335376562353665386537393766336363383664306430633334 Jan 20 13:58:24.087000 audit: BPF prog-id=273 op=UNLOAD Jan 20 13:58:24.087000 audit[5651]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5640 pid=5651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:24.087000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935343335376562353665386537393766336363383664306430633334 Jan 20 13:58:24.087000 audit: BPF prog-id=272 op=UNLOAD Jan 20 13:58:24.087000 audit[5651]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5640 pid=5651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:24.087000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935343335376562353665386537393766336363383664306430633334 Jan 20 13:58:24.088000 audit: BPF prog-id=274 op=LOAD Jan 20 13:58:24.088000 audit[5651]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c648 a2=98 a3=0 items=0 ppid=5640 pid=5651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:24.088000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935343335376562353665386537393766336363383664306430633334 Jan 20 13:58:24.104000 audit[5673]: NETFILTER_CFG table=filter:141 family=2 entries=17 op=nft_register_rule pid=5673 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:58:24.104000 audit[5673]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe2fe3eb0 a2=0 a3=1 items=0 ppid=3859 pid=5673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:24.104000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:58:24.110000 audit[5673]: NETFILTER_CFG table=nat:142 family=2 entries=35 op=nft_register_chain pid=5673 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:58:24.110000 audit[5673]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=14196 a0=3 a1=ffffe2fe3eb0 a2=0 a3=1 items=0 ppid=3859 pid=5673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:24.110000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:58:24.122804 containerd[2143]: time="2026-01-20T13:58:24.122732252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4dfzv,Uid:1660a4b3-622a-43b1-b762-2fd404939ef1,Namespace:kube-system,Attempt:0,} returns sandbox id \"954357eb56e8e797f3cc86d0d0c34005b3f84fee598223f88a0b744a90d23138\"" Jan 20 13:58:24.125298 containerd[2143]: time="2026-01-20T13:58:24.125257801Z" level=info msg="CreateContainer within sandbox \"954357eb56e8e797f3cc86d0d0c34005b3f84fee598223f88a0b744a90d23138\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 13:58:24.148864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2947439440.mount: Deactivated successfully. Jan 20 13:58:24.150266 containerd[2143]: time="2026-01-20T13:58:24.150233527Z" level=info msg="Container b2e0321e2d26ca43ab17388e7333d0e47cf31fe8b98cf7af1bd9dbe8411667c1: CDI devices from CRI Config.CDIDevices: []" Jan 20 13:58:24.165534 containerd[2143]: time="2026-01-20T13:58:24.165499038Z" level=info msg="CreateContainer within sandbox \"954357eb56e8e797f3cc86d0d0c34005b3f84fee598223f88a0b744a90d23138\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b2e0321e2d26ca43ab17388e7333d0e47cf31fe8b98cf7af1bd9dbe8411667c1\"" Jan 20 13:58:24.167273 containerd[2143]: time="2026-01-20T13:58:24.167251275Z" level=info msg="StartContainer for \"b2e0321e2d26ca43ab17388e7333d0e47cf31fe8b98cf7af1bd9dbe8411667c1\"" Jan 20 13:58:24.168240 containerd[2143]: time="2026-01-20T13:58:24.168135654Z" level=info msg="connecting to shim b2e0321e2d26ca43ab17388e7333d0e47cf31fe8b98cf7af1bd9dbe8411667c1" address="unix:///run/containerd/s/d26c9c9c3c7017a9389c25a295e542d8a4999994e1304bbfeb7f1ec64d7ba2d4" protocol=ttrpc version=3 Jan 20 13:58:24.186593 systemd[1]: Started cri-containerd-b2e0321e2d26ca43ab17388e7333d0e47cf31fe8b98cf7af1bd9dbe8411667c1.scope - libcontainer container b2e0321e2d26ca43ab17388e7333d0e47cf31fe8b98cf7af1bd9dbe8411667c1. 
Jan 20 13:58:24.195000 audit: BPF prog-id=275 op=LOAD Jan 20 13:58:24.196000 audit: BPF prog-id=276 op=LOAD Jan 20 13:58:24.196000 audit[5683]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5640 pid=5683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:24.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232653033323165326432366361343361623137333838653733333364 Jan 20 13:58:24.196000 audit: BPF prog-id=276 op=UNLOAD Jan 20 13:58:24.196000 audit[5683]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5640 pid=5683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:24.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232653033323165326432366361343361623137333838653733333364 Jan 20 13:58:24.196000 audit: BPF prog-id=277 op=LOAD Jan 20 13:58:24.196000 audit[5683]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5640 pid=5683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:24.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232653033323165326432366361343361623137333838653733333364 Jan 20 13:58:24.196000 audit: BPF prog-id=278 op=LOAD Jan 20 13:58:24.196000 audit[5683]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5640 pid=5683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:24.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232653033323165326432366361343361623137333838653733333364 Jan 20 13:58:24.196000 audit: BPF prog-id=278 op=UNLOAD Jan 20 13:58:24.196000 audit[5683]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5640 pid=5683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:24.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232653033323165326432366361343361623137333838653733333364 Jan 20 13:58:24.197000 audit: BPF prog-id=277 op=UNLOAD Jan 20 13:58:24.197000 audit[5683]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5640 pid=5683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:24.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232653033323165326432366361343361623137333838653733333364 Jan 20 13:58:24.197000 audit: BPF prog-id=279 op=LOAD Jan 20 13:58:24.197000 audit[5683]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5640 pid=5683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:24.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232653033323165326432366361343361623137333838653733333364 Jan 20 13:58:24.218354 containerd[2143]: time="2026-01-20T13:58:24.218307745Z" level=info msg="StartContainer for \"b2e0321e2d26ca43ab17388e7333d0e47cf31fe8b98cf7af1bd9dbe8411667c1\" returns successfully" Jan 20 13:58:24.259983 systemd-networkd[1707]: calic1d11375376: Gained IPv6LL Jan 20 13:58:24.643557 systemd-networkd[1707]: cali7216b1a22ce: Gained IPv6LL Jan 20 13:58:25.013177 kubelet[3709]: E0120 13:58:25.012951 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dc74856cf-zx59p" podUID="9e279828-5731-43f9-9e3d-019d423f62e9" Jan 20 13:58:25.027962 kubelet[3709]: I0120 13:58:25.027913 3709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4dfzv" podStartSLOduration=46.02789955 podStartE2EDuration="46.02789955s" podCreationTimestamp="2026-01-20 13:57:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 13:58:25.027517579 +0000 UTC m=+51.260958770" watchObservedRunningTime="2026-01-20 13:58:25.02789955 +0000 UTC m=+51.261340741" Jan 20 13:58:25.053000 audit[5718]: NETFILTER_CFG table=filter:143 family=2 entries=14 op=nft_register_rule pid=5718 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:58:25.053000 audit[5718]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff68b8c20 a2=0 a3=1 items=0 ppid=3859 pid=5718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:25.053000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:58:25.064000 audit[5718]: NETFILTER_CFG table=nat:144 family=2 entries=44 
op=nft_register_rule pid=5718 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:58:25.064000 audit[5718]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=fffff68b8c20 a2=0 a3=1 items=0 ppid=3859 pid=5718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:25.064000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:58:25.155592 systemd-networkd[1707]: calif0fa6e150f0: Gained IPv6LL Jan 20 13:58:26.076000 audit[5720]: NETFILTER_CFG table=filter:145 family=2 entries=14 op=nft_register_rule pid=5720 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:58:26.081828 kernel: kauditd_printk_skb: 189 callbacks suppressed Jan 20 13:58:26.081882 kernel: audit: type=1325 audit(1768917506.076:778): table=filter:145 family=2 entries=14 op=nft_register_rule pid=5720 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:58:26.076000 audit[5720]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff7a8afb0 a2=0 a3=1 items=0 ppid=3859 pid=5720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:26.111420 kernel: audit: type=1300 audit(1768917506.076:778): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff7a8afb0 a2=0 a3=1 items=0 ppid=3859 pid=5720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:26.076000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:58:26.121681 kernel: audit: type=1327 audit(1768917506.076:778): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:58:26.112000 audit[5720]: NETFILTER_CFG table=nat:146 family=2 entries=56 op=nft_register_chain pid=5720 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:58:26.132660 kernel: audit: type=1325 audit(1768917506.112:779): table=nat:146 family=2 entries=56 op=nft_register_chain pid=5720 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:58:26.112000 audit[5720]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=fffff7a8afb0 a2=0 a3=1 items=0 ppid=3859 pid=5720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:26.152147 kernel: audit: type=1300 audit(1768917506.112:779): arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=fffff7a8afb0 a2=0 a3=1 items=0 ppid=3859 pid=5720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:58:26.112000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:58:26.162408 kernel: audit: type=1327 audit(1768917506.112:779): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:58:28.834781 containerd[2143]: time="2026-01-20T13:58:28.834531118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 13:58:29.162822 containerd[2143]: time="2026-01-20T13:58:29.162556552Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 13:58:29.165822 containerd[2143]: time="2026-01-20T13:58:29.165720688Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 13:58:29.165950 containerd[2143]: time="2026-01-20T13:58:29.165779530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 13:58:29.166493 kubelet[3709]: E0120 13:58:29.166159 3709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 13:58:29.166493 kubelet[3709]: E0120 13:58:29.166242 3709 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 13:58:29.166493 kubelet[3709]: E0120 13:58:29.166405 3709 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a3d21d502bf249319790de3c1cf28ae0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q6tsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77799d95d6-2jfd5_calico-system(f1bb23d8-4b1f-4aa2-8b63-89adfd023536): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
logger="UnhandledError" Jan 20 13:58:29.169320 containerd[2143]: time="2026-01-20T13:58:29.169092207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 13:58:29.531423 containerd[2143]: time="2026-01-20T13:58:29.531251132Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 13:58:29.535170 containerd[2143]: time="2026-01-20T13:58:29.535065185Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 13:58:29.535170 containerd[2143]: time="2026-01-20T13:58:29.535154676Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 13:58:29.535538 kubelet[3709]: E0120 13:58:29.535494 3709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 13:58:29.536024 kubelet[3709]: E0120 13:58:29.535722 3709 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 13:58:29.536024 kubelet[3709]: E0120 13:58:29.535837 3709 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q6tsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePoli
cy{},RestartPolicy:nil,} start failed in pod whisker-77799d95d6-2jfd5_calico-system(f1bb23d8-4b1f-4aa2-8b63-89adfd023536): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 13:58:29.537279 kubelet[3709]: E0120 13:58:29.537237 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77799d95d6-2jfd5" podUID="f1bb23d8-4b1f-4aa2-8b63-89adfd023536" Jan 20 13:58:32.833818 containerd[2143]: time="2026-01-20T13:58:32.833557270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 13:58:33.092881 containerd[2143]: time="2026-01-20T13:58:33.092679990Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 13:58:33.097415 containerd[2143]: time="2026-01-20T13:58:33.097293099Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 13:58:33.097415 containerd[2143]: time="2026-01-20T13:58:33.097369549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 13:58:33.097682 kubelet[3709]: E0120 13:58:33.097636 3709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 13:58:33.098086 kubelet[3709]: E0120 13:58:33.097695 3709 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 13:58:33.098086 kubelet[3709]: E0120 13:58:33.097805 3709 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wj6nr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68d86775db-xr92p_calico-system(d927ba29-b3e9-4c16-90f8-d925c765f0ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 13:58:33.099245 kubelet[3709]: E0120 13:58:33.098999 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68d86775db-xr92p" podUID="d927ba29-b3e9-4c16-90f8-d925c765f0ce" Jan 20 13:58:34.833399 containerd[2143]: time="2026-01-20T13:58:34.833342091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 13:58:35.082449 containerd[2143]: 
time="2026-01-20T13:58:35.082371279Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 13:58:35.085526 containerd[2143]: time="2026-01-20T13:58:35.085434092Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 13:58:35.085741 containerd[2143]: time="2026-01-20T13:58:35.085661491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 13:58:35.085918 kubelet[3709]: E0120 13:58:35.085854 3709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 13:58:35.086572 kubelet[3709]: E0120 13:58:35.085924 3709 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 13:58:35.086572 kubelet[3709]: E0120 13:58:35.086048 3709 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c9rlh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-7dc74856cf-7w8mb_calico-apiserver(5c39ce3f-4fc4-445a-adab-faa3675de94e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 13:58:35.087226 kubelet[3709]: E0120 13:58:35.087194 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dc74856cf-7w8mb" podUID="5c39ce3f-4fc4-445a-adab-faa3675de94e" Jan 20 13:58:35.833430 containerd[2143]: time="2026-01-20T13:58:35.832962660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 13:58:36.090932 containerd[2143]: time="2026-01-20T13:58:36.090749089Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 13:58:36.093722 containerd[2143]: time="2026-01-20T13:58:36.093626576Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 13:58:36.093722 containerd[2143]: time="2026-01-20T13:58:36.093682954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 13:58:36.094191 kubelet[3709]: E0120 13:58:36.093980 3709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 13:58:36.094191 kubelet[3709]: E0120 13:58:36.094038 3709 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 13:58:36.094191 kubelet[3709]: E0120 13:58:36.094153 3709 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ftvn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9ztjh_calico-system(85bd9e66-9697-4ce6-b203-5bf288af5ea8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 13:58:36.095398 kubelet[3709]: E0120 13:58:36.095336 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9ztjh" podUID="85bd9e66-9697-4ce6-b203-5bf288af5ea8" Jan 20 13:58:36.833714 containerd[2143]: time="2026-01-20T13:58:36.833063707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 
13:58:37.094051 containerd[2143]: time="2026-01-20T13:58:37.093922117Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 13:58:37.097308 containerd[2143]: time="2026-01-20T13:58:37.097168792Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 13:58:37.097308 containerd[2143]: time="2026-01-20T13:58:37.097258162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 13:58:37.097449 kubelet[3709]: E0120 13:58:37.097417 3709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 13:58:37.097704 kubelet[3709]: E0120 13:58:37.097465 3709 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 13:58:37.097704 kubelet[3709]: E0120 13:58:37.097564 3709 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-szm87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vhqxx_calico-system(23535579-9237-4da3-a34f-0ccbbd6a2ee0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 13:58:37.099662 containerd[2143]: time="2026-01-20T13:58:37.099615850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 13:58:37.377947 containerd[2143]: time="2026-01-20T13:58:37.377582571Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 13:58:37.380489 containerd[2143]: time="2026-01-20T13:58:37.380451090Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 13:58:37.380570 containerd[2143]: time="2026-01-20T13:58:37.380537957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 13:58:37.380739 kubelet[3709]: E0120 13:58:37.380692 3709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 13:58:37.380885 kubelet[3709]: E0120 13:58:37.380831 3709 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 13:58:37.381129 kubelet[3709]: E0120 13:58:37.381049 3709 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-szm87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vhqxx_calico-system(23535579-9237-4da3-a34f-0ccbbd6a2ee0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 13:58:37.382432 kubelet[3709]: E0120 13:58:37.382374 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vhqxx" podUID="23535579-9237-4da3-a34f-0ccbbd6a2ee0" Jan 20 13:58:37.832535 containerd[2143]: time="2026-01-20T13:58:37.832312096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 13:58:38.142689 containerd[2143]: time="2026-01-20T13:58:38.142431801Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 13:58:38.145595 containerd[2143]: time="2026-01-20T13:58:38.145534159Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 13:58:38.145856 containerd[2143]: time="2026-01-20T13:58:38.145594713Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 13:58:38.145982 kubelet[3709]: E0120 13:58:38.145885 3709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 13:58:38.145982 kubelet[3709]: E0120 13:58:38.145944 3709 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 13:58:38.147113 kubelet[3709]: E0120 13:58:38.146053 3709 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7rhtb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7dc74856cf-zx59p_calico-apiserver(9e279828-5731-43f9-9e3d-019d423f62e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 13:58:38.147343 kubelet[3709]: E0120 13:58:38.147235 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dc74856cf-zx59p" podUID="9e279828-5731-43f9-9e3d-019d423f62e9" Jan 20 13:58:41.835020 kubelet[3709]: E0120 13:58:41.834978 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77799d95d6-2jfd5" podUID="f1bb23d8-4b1f-4aa2-8b63-89adfd023536" Jan 20 13:58:44.832432 kubelet[3709]: E0120 13:58:44.831822 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68d86775db-xr92p" podUID="d927ba29-b3e9-4c16-90f8-d925c765f0ce" Jan 20 13:58:48.833424 kubelet[3709]: E0120 13:58:48.833357 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vhqxx" podUID="23535579-9237-4da3-a34f-0ccbbd6a2ee0" Jan 20 13:58:50.833005 kubelet[3709]: E0120 13:58:50.832969 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9ztjh" podUID="85bd9e66-9697-4ce6-b203-5bf288af5ea8" Jan 20 13:58:50.833472 kubelet[3709]: E0120 13:58:50.833022 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dc74856cf-7w8mb" podUID="5c39ce3f-4fc4-445a-adab-faa3675de94e" Jan 20 13:58:52.833120 kubelet[3709]: E0120 13:58:52.832804 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dc74856cf-zx59p" podUID="9e279828-5731-43f9-9e3d-019d423f62e9" Jan 20 13:58:54.834085 containerd[2143]: time="2026-01-20T13:58:54.833980538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 13:58:55.086769 containerd[2143]: time="2026-01-20T13:58:55.086728934Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 13:58:55.089647 containerd[2143]: time="2026-01-20T13:58:55.089610262Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 13:58:55.089647 containerd[2143]: time="2026-01-20T13:58:55.089665919Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 13:58:55.089827 kubelet[3709]: E0120 13:58:55.089788 3709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 13:58:55.090097 kubelet[3709]: E0120 13:58:55.089840 3709 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 13:58:55.090097 kubelet[3709]: E0120 13:58:55.089944 3709 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a3d21d502bf249319790de3c1cf28ae0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q6tsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77799d95d6-2jfd5_calico-system(f1bb23d8-4b1f-4aa2-8b63-89adfd023536): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 13:58:55.091940 containerd[2143]: time="2026-01-20T13:58:55.091911828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 13:58:55.354454 containerd[2143]: time="2026-01-20T13:58:55.353979555Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 13:58:55.357061 containerd[2143]: time="2026-01-20T13:58:55.357014695Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 13:58:55.357155 containerd[2143]: time="2026-01-20T13:58:55.357108138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 13:58:55.357960 kubelet[3709]: E0120 13:58:55.357921 3709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 13:58:55.358046 kubelet[3709]: E0120 13:58:55.357970 3709 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 13:58:55.358086 kubelet[3709]: E0120 13:58:55.358062 3709 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q6tsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77799d95d6-2jfd5_calico-system(f1bb23d8-4b1f-4aa2-8b63-89adfd023536): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 13:58:55.359534 kubelet[3709]: E0120 13:58:55.359465 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77799d95d6-2jfd5" podUID="f1bb23d8-4b1f-4aa2-8b63-89adfd023536" Jan 20 13:58:59.835131 containerd[2143]: time="2026-01-20T13:58:59.833928425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 13:59:00.089940 containerd[2143]: time="2026-01-20T13:59:00.089816108Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 13:59:00.093221 containerd[2143]: time="2026-01-20T13:59:00.093176410Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 13:59:00.093904 
containerd[2143]: time="2026-01-20T13:59:00.093247580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 13:59:00.093904 containerd[2143]: time="2026-01-20T13:59:00.093666712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 13:59:00.093973 kubelet[3709]: E0120 13:59:00.093366 3709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 13:59:00.093973 kubelet[3709]: E0120 13:59:00.093439 3709 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 13:59:00.093973 kubelet[3709]: E0120 13:59:00.093823 3709 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-szm87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vhqxx_calico-system(23535579-9237-4da3-a34f-0ccbbd6a2ee0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 13:59:00.358474 containerd[2143]: time="2026-01-20T13:59:00.358345085Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 13:59:00.361314 containerd[2143]: time="2026-01-20T13:59:00.361272990Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" 
failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 13:59:00.361708 containerd[2143]: time="2026-01-20T13:59:00.361355432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 13:59:00.361777 kubelet[3709]: E0120 13:59:00.361505 3709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 13:59:00.361777 kubelet[3709]: E0120 13:59:00.361565 3709 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 13:59:00.361840 kubelet[3709]: E0120 13:59:00.361761 3709 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wj6nr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68d86775db-xr92p_calico-system(d927ba29-b3e9-4c16-90f8-d925c765f0ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 13:59:00.362076 containerd[2143]: time="2026-01-20T13:59:00.362054918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 13:59:00.363472 kubelet[3709]: E0120 13:59:00.363337 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68d86775db-xr92p" podUID="d927ba29-b3e9-4c16-90f8-d925c765f0ce" Jan 20 13:59:00.639779 containerd[2143]: time="2026-01-20T13:59:00.639063271Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 13:59:00.642839 containerd[2143]: time="2026-01-20T13:59:00.642700885Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 13:59:00.642839 containerd[2143]: time="2026-01-20T13:59:00.642790336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 13:59:00.643172 kubelet[3709]: E0120 13:59:00.643118 3709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 13:59:00.643353 kubelet[3709]: E0120 13:59:00.643261 3709 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 13:59:00.643539 kubelet[3709]: E0120 13:59:00.643513 3709 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-szm87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vhqxx_calico-system(23535579-9237-4da3-a34f-0ccbbd6a2ee0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 13:59:00.644845 kubelet[3709]: E0120 13:59:00.644805 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vhqxx" podUID="23535579-9237-4da3-a34f-0ccbbd6a2ee0" Jan 20 13:59:03.834674 containerd[2143]: time="2026-01-20T13:59:03.834614238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 13:59:04.225788 containerd[2143]: time="2026-01-20T13:59:04.225523497Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 13:59:04.228947 containerd[2143]: time="2026-01-20T13:59:04.228774923Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 13:59:04.228947 containerd[2143]: time="2026-01-20T13:59:04.228875742Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 13:59:04.229079 kubelet[3709]: E0120 13:59:04.229031 3709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 13:59:04.229510 kubelet[3709]: E0120 13:59:04.229079 3709 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 13:59:04.229510 kubelet[3709]: E0120 13:59:04.229267 3709 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7rhtb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7dc74856cf-zx59p_calico-apiserver(9e279828-5731-43f9-9e3d-019d423f62e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 13:59:04.229807 containerd[2143]: time="2026-01-20T13:59:04.229607988Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 13:59:04.230804 kubelet[3709]: E0120 13:59:04.230712 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dc74856cf-zx59p" podUID="9e279828-5731-43f9-9e3d-019d423f62e9" Jan 20 13:59:04.467698 containerd[2143]: time="2026-01-20T13:59:04.467648500Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 13:59:04.470721 containerd[2143]: time="2026-01-20T13:59:04.470625798Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 13:59:04.470721 containerd[2143]: time="2026-01-20T13:59:04.470676848Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 13:59:04.470855 kubelet[3709]: E0120 13:59:04.470825 3709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 13:59:04.470900 kubelet[3709]: E0120 13:59:04.470871 3709 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 13:59:04.471045 kubelet[3709]: E0120 13:59:04.470981 3709 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ftvn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9ztjh_calico-system(85bd9e66-9697-4ce6-b203-5bf288af5ea8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 13:59:04.472154 kubelet[3709]: E0120 13:59:04.472114 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9ztjh" podUID="85bd9e66-9697-4ce6-b203-5bf288af5ea8" Jan 20 13:59:05.832699 containerd[2143]: time="2026-01-20T13:59:05.832566332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 
20 13:59:06.077120 containerd[2143]: time="2026-01-20T13:59:06.077066559Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 13:59:06.079909 containerd[2143]: time="2026-01-20T13:59:06.079874604Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 13:59:06.079995 containerd[2143]: time="2026-01-20T13:59:06.079956702Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 13:59:06.080235 kubelet[3709]: E0120 13:59:06.080181 3709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 13:59:06.080688 kubelet[3709]: E0120 13:59:06.080572 3709 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 13:59:06.080688 kubelet[3709]: E0120 13:59:06.080975 3709 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c9rlh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod calico-apiserver-7dc74856cf-7w8mb_calico-apiserver(5c39ce3f-4fc4-445a-adab-faa3675de94e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 13:59:06.086660 kubelet[3709]: E0120 13:59:06.086618 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dc74856cf-7w8mb" podUID="5c39ce3f-4fc4-445a-adab-faa3675de94e" Jan 20 13:59:09.833134 kubelet[3709]: E0120 13:59:09.832613 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77799d95d6-2jfd5" podUID="f1bb23d8-4b1f-4aa2-8b63-89adfd023536" Jan 20 13:59:12.833161 kubelet[3709]: E0120 13:59:12.832349 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68d86775db-xr92p" podUID="d927ba29-b3e9-4c16-90f8-d925c765f0ce" Jan 20 13:59:14.834731 kubelet[3709]: E0120 13:59:14.833980 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vhqxx" podUID="23535579-9237-4da3-a34f-0ccbbd6a2ee0" Jan 20 13:59:17.812006 systemd[1]: Started sshd@7-10.200.20.32:22-10.200.16.10:47886.service - OpenSSH per-connection server daemon (10.200.16.10:47886). 
Jan 20 13:59:17.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.32:22-10.200.16.10:47886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:59:17.829408 kernel: audit: type=1130 audit(1768917557.811:780): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.32:22-10.200.16.10:47886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:59:17.832699 kubelet[3709]: E0120 13:59:17.832669 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9ztjh" podUID="85bd9e66-9697-4ce6-b203-5bf288af5ea8" Jan 20 13:59:17.833278 kubelet[3709]: E0120 13:59:17.832939 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dc74856cf-zx59p" podUID="9e279828-5731-43f9-9e3d-019d423f62e9" Jan 20 13:59:18.246000 audit[5790]: USER_ACCT pid=5790 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:18.250552 sshd[5790]: Accepted publickey for core from 10.200.16.10 port 47886 ssh2: RSA SHA256:8oRp7fHfsMbr5tiGU4DdFYKNuE3Q7/G5fCohEz31AuI Jan 20 13:59:18.263795 sshd-session[5790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 13:59:18.262000 audit[5790]: CRED_ACQ pid=5790 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:18.271148 systemd-logind[2119]: New session 11 of user core. Jan 20 13:59:18.282260 kernel: audit: type=1101 audit(1768917558.246:781): pid=5790 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:18.282348 kernel: audit: type=1103 audit(1768917558.262:782): pid=5790 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:18.284599 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 20 13:59:18.292172 kernel: audit: type=1006 audit(1768917558.262:783): pid=5790 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 20 13:59:18.262000 audit[5790]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffecb64fe0 a2=3 a3=0 items=0 ppid=1 pid=5790 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:59:18.309212 kernel: audit: type=1300 audit(1768917558.262:783): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffecb64fe0 a2=3 a3=0 items=0 ppid=1 pid=5790 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:59:18.262000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 13:59:18.316614 kernel: audit: type=1327 audit(1768917558.262:783): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 13:59:18.310000 audit[5790]: USER_START pid=5790 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:18.335947 kernel: audit: type=1105 audit(1768917558.310:784): pid=5790 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:18.313000 audit[5794]: CRED_ACQ pid=5794 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:18.350826 kernel: audit: type=1103 audit(1768917558.313:785): pid=5794 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:18.522249 sshd[5794]: Connection closed by 10.200.16.10 port 47886 Jan 20 13:59:18.523581 sshd-session[5790]: pam_unix(sshd:session): session closed for user core Jan 20 13:59:18.525000 audit[5790]: USER_END pid=5790 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:18.527984 systemd-logind[2119]: Session 11 logged out. Waiting for processes to exit. Jan 20 13:59:18.529891 systemd[1]: sshd@7-10.200.20.32:22-10.200.16.10:47886.service: Deactivated successfully. Jan 20 13:59:18.533862 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 13:59:18.536268 systemd-logind[2119]: Removed session 11. 
Jan 20 13:59:18.525000 audit[5790]: CRED_DISP pid=5790 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:18.559490 kernel: audit: type=1106 audit(1768917558.525:786): pid=5790 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:18.559569 kernel: audit: type=1104 audit(1768917558.525:787): pid=5790 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:18.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.32:22-10.200.16.10:47886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:59:20.833054 kubelet[3709]: E0120 13:59:20.832778 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dc74856cf-7w8mb" podUID="5c39ce3f-4fc4-445a-adab-faa3675de94e" Jan 20 13:59:21.833644 kubelet[3709]: E0120 13:59:21.833578 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77799d95d6-2jfd5" podUID="f1bb23d8-4b1f-4aa2-8b63-89adfd023536" Jan 20 13:59:23.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.32:22-10.200.16.10:57372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:59:23.614644 systemd[1]: Started sshd@8-10.200.20.32:22-10.200.16.10:57372.service - OpenSSH per-connection server daemon (10.200.16.10:57372). Jan 20 13:59:23.617917 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 13:59:23.618723 kernel: audit: type=1130 audit(1768917563.614:789): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.32:22-10.200.16.10:57372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 13:59:24.053000 audit[5835]: USER_ACCT pid=5835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:24.054627 sshd[5835]: Accepted publickey for core from 10.200.16.10 port 57372 ssh2: RSA SHA256:8oRp7fHfsMbr5tiGU4DdFYKNuE3Q7/G5fCohEz31AuI Jan 20 13:59:24.072233 sshd-session[5835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 13:59:24.069000 audit[5835]: CRED_ACQ pid=5835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:24.089270 kernel: audit: type=1101 audit(1768917564.053:790): pid=5835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:24.090477 kernel: audit: type=1103 audit(1768917564.069:791): pid=5835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:24.104265 kernel: audit: type=1006 audit(1768917564.069:792): pid=5835 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 20 13:59:24.096040 systemd-logind[2119]: New session 12 of user core. Jan 20 13:59:24.107574 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 20 13:59:24.069000 audit[5835]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc183a1d0 a2=3 a3=0 items=0 ppid=1 pid=5835 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:59:24.126495 kernel: audit: type=1300 audit(1768917564.069:792): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc183a1d0 a2=3 a3=0 items=0 ppid=1 pid=5835 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:59:24.069000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 13:59:24.140418 kernel: audit: type=1327 audit(1768917564.069:792): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 13:59:24.114000 audit[5835]: USER_START pid=5835 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:24.160798 kernel: audit: type=1105 audit(1768917564.114:793): pid=5835 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:24.130000 audit[5839]: CRED_ACQ pid=5839 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:24.176110 kernel: audit: type=1103 audit(1768917564.130:794): pid=5839 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:24.353633 sshd[5839]: Connection closed by 10.200.16.10 port 57372 Jan 20 13:59:24.353952 sshd-session[5835]: pam_unix(sshd:session): session closed for user core Jan 20 13:59:24.355000 audit[5835]: USER_END pid=5835 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:24.359182 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 13:59:24.361128 systemd-logind[2119]: Session 12 logged out. Waiting for processes to exit. Jan 20 13:59:24.361978 systemd[1]: sshd@8-10.200.20.32:22-10.200.16.10:57372.service: Deactivated successfully. Jan 20 13:59:24.366035 systemd-logind[2119]: Removed session 12. 
Jan 20 13:59:24.355000 audit[5835]: CRED_DISP pid=5835 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:24.392410 kernel: audit: type=1106 audit(1768917564.355:795): pid=5835 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:24.392486 kernel: audit: type=1104 audit(1768917564.355:796): pid=5835 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:24.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.32:22-10.200.16.10:57372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:59:24.832391 kubelet[3709]: E0120 13:59:24.832197 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68d86775db-xr92p" podUID="d927ba29-b3e9-4c16-90f8-d925c765f0ce" Jan 20 13:59:25.834944 kubelet[3709]: E0120 13:59:25.834083 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vhqxx" podUID="23535579-9237-4da3-a34f-0ccbbd6a2ee0" Jan 20 13:59:28.832551 kubelet[3709]: E0120 13:59:28.832479 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9ztjh" podUID="85bd9e66-9697-4ce6-b203-5bf288af5ea8" Jan 20 13:59:29.439637 systemd[1]: Started sshd@9-10.200.20.32:22-10.200.16.10:57374.service - OpenSSH per-connection server daemon (10.200.16.10:57374). 
Jan 20 13:59:29.453709 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 13:59:29.453818 kernel: audit: type=1130 audit(1768917569.438:798): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.32:22-10.200.16.10:57374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:59:29.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.32:22-10.200.16.10:57374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:59:29.841000 audit[5852]: USER_ACCT pid=5852 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:29.858737 sshd[5852]: Accepted publickey for core from 10.200.16.10 port 57374 ssh2: RSA SHA256:8oRp7fHfsMbr5tiGU4DdFYKNuE3Q7/G5fCohEz31AuI Jan 20 13:59:29.858000 audit[5852]: CRED_ACQ pid=5852 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:29.861409 sshd-session[5852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 13:59:29.869822 systemd-logind[2119]: New session 13 of user core. Jan 20 13:59:29.874199 kernel: audit: type=1101 audit(1768917569.841:799): pid=5852 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:29.874264 kernel: audit: type=1103 audit(1768917569.858:800): pid=5852 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:29.877609 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 20 13:59:29.884097 kernel: audit: type=1006 audit(1768917569.858:801): pid=5852 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 20 13:59:29.858000 audit[5852]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff7f816e0 a2=3 a3=0 items=0 ppid=1 pid=5852 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:59:29.904207 kernel: audit: type=1300 audit(1768917569.858:801): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff7f816e0 a2=3 a3=0 items=0 ppid=1 pid=5852 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:59:29.858000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 13:59:29.913905 kernel: audit: type=1327 audit(1768917569.858:801): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 13:59:29.880000 audit[5852]: USER_START pid=5852 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:29.934096 kernel: audit: type=1105 audit(1768917569.880:802): pid=5852 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:29.884000 audit[5856]: CRED_ACQ pid=5856 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:29.951590 kernel: audit: type=1103 audit(1768917569.884:803): pid=5856 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:30.110471 sshd[5856]: Connection closed by 10.200.16.10 port 57374 Jan 20 13:59:30.112024 sshd-session[5852]: pam_unix(sshd:session): session closed for user core Jan 20 13:59:30.112000 audit[5852]: USER_END pid=5852 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:30.116781 systemd[1]: sshd@9-10.200.20.32:22-10.200.16.10:57374.service: Deactivated successfully. Jan 20 13:59:30.117102 systemd-logind[2119]: Session 13 logged out. Waiting for processes to exit. Jan 20 13:59:30.120139 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 13:59:30.125404 systemd-logind[2119]: Removed session 13. 
Jan 20 13:59:30.112000 audit[5852]: CRED_DISP pid=5852 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:30.147144 kernel: audit: type=1106 audit(1768917570.112:804): pid=5852 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:30.147218 kernel: audit: type=1104 audit(1768917570.112:805): pid=5852 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:30.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.32:22-10.200.16.10:57374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:59:30.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.32:22-10.200.16.10:56680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:59:30.203201 systemd[1]: Started sshd@10-10.200.20.32:22-10.200.16.10:56680.service - OpenSSH per-connection server daemon (10.200.16.10:56680). Jan 20 13:59:30.631000 audit[5868]: USER_ACCT pid=5868 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:30.632790 sshd[5868]: Accepted publickey for core from 10.200.16.10 port 56680 ssh2: RSA SHA256:8oRp7fHfsMbr5tiGU4DdFYKNuE3Q7/G5fCohEz31AuI Jan 20 13:59:30.632000 audit[5868]: CRED_ACQ pid=5868 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:30.632000 audit[5868]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffeecfed00 a2=3 a3=0 items=0 ppid=1 pid=5868 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:59:30.632000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 13:59:30.635769 sshd-session[5868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 13:59:30.642248 systemd-logind[2119]: New session 14 of user core. Jan 20 13:59:30.644565 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 20 13:59:30.647000 audit[5868]: USER_START pid=5868 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:30.649000 audit[5872]: CRED_ACQ pid=5872 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:30.832263 kubelet[3709]: E0120 13:59:30.832219 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dc74856cf-zx59p" podUID="9e279828-5731-43f9-9e3d-019d423f62e9" Jan 20 13:59:30.955175 sshd[5872]: Connection closed by 10.200.16.10 port 56680 Jan 20 13:59:30.955065 sshd-session[5868]: pam_unix(sshd:session): session closed for user core Jan 20 13:59:30.956000 audit[5868]: USER_END pid=5868 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:30.956000 audit[5868]: CRED_DISP pid=5868 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:30.960741 systemd-logind[2119]: Session 14 logged out. Waiting for processes to exit. Jan 20 13:59:30.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.32:22-10.200.16.10:56680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:59:30.960804 systemd[1]: sshd@10-10.200.20.32:22-10.200.16.10:56680.service: Deactivated successfully. Jan 20 13:59:30.963021 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 13:59:30.966525 systemd-logind[2119]: Removed session 14. Jan 20 13:59:31.046461 systemd[1]: Started sshd@11-10.200.20.32:22-10.200.16.10:56682.service - OpenSSH per-connection server daemon (10.200.16.10:56682). Jan 20 13:59:31.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.32:22-10.200.16.10:56682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 13:59:31.467000 audit[5881]: USER_ACCT pid=5881 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:31.469508 sshd[5881]: Accepted publickey for core from 10.200.16.10 port 56682 ssh2: RSA SHA256:8oRp7fHfsMbr5tiGU4DdFYKNuE3Q7/G5fCohEz31AuI Jan 20 13:59:31.468000 audit[5881]: CRED_ACQ pid=5881 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:31.468000 audit[5881]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd8d23260 a2=3 a3=0 items=0 ppid=1 pid=5881 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:59:31.468000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 13:59:31.471980 sshd-session[5881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 13:59:31.476598 systemd-logind[2119]: New session 15 of user core. Jan 20 13:59:31.481547 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 20 13:59:31.483000 audit[5881]: USER_START pid=5881 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:31.484000 audit[5885]: CRED_ACQ pid=5885 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:31.752137 sshd[5885]: Connection closed by 10.200.16.10 port 56682 Jan 20 13:59:31.750943 sshd-session[5881]: pam_unix(sshd:session): session closed for user core Jan 20 13:59:31.751000 audit[5881]: USER_END pid=5881 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:31.752000 audit[5881]: CRED_DISP pid=5881 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:31.757783 systemd-logind[2119]: Session 15 logged out. Waiting for processes to exit. Jan 20 13:59:31.758025 systemd[1]: sshd@11-10.200.20.32:22-10.200.16.10:56682.service: Deactivated successfully. Jan 20 13:59:31.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.32:22-10.200.16.10:56682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:59:31.760424 systemd[1]: session-15.scope: Deactivated successfully. 
Jan 20 13:59:31.763084 systemd-logind[2119]: Removed session 15. Jan 20 13:59:32.833509 kubelet[3709]: E0120 13:59:32.833278 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dc74856cf-7w8mb" podUID="5c39ce3f-4fc4-445a-adab-faa3675de94e" Jan 20 13:59:36.834206 containerd[2143]: time="2026-01-20T13:59:36.833857497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 13:59:36.835313 kubelet[3709]: E0120 13:59:36.835270 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vhqxx" podUID="23535579-9237-4da3-a34f-0ccbbd6a2ee0" Jan 20 13:59:36.838621 systemd[1]: Started sshd@12-10.200.20.32:22-10.200.16.10:56684.service - OpenSSH per-connection server daemon (10.200.16.10:56684). Jan 20 13:59:36.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.32:22-10.200.16.10:56684 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:59:36.841906 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 20 13:59:36.841989 kernel: audit: type=1130 audit(1768917576.837:825): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.32:22-10.200.16.10:56684 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 13:59:37.149344 containerd[2143]: time="2026-01-20T13:59:37.149292243Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 13:59:37.152400 containerd[2143]: time="2026-01-20T13:59:37.152332190Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 13:59:37.152479 containerd[2143]: time="2026-01-20T13:59:37.152401280Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 13:59:37.153545 kubelet[3709]: E0120 13:59:37.153483 3709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 13:59:37.153722 kubelet[3709]: E0120 13:59:37.153536 3709 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 13:59:37.154149 kubelet[3709]: E0120 13:59:37.154113 3709 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a3d21d502bf249319790de3c1cf28ae0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q6tsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77799d95d6-2jfd5_calico-system(f1bb23d8-4b1f-4aa2-8b63-89adfd023536): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 13:59:37.156357 containerd[2143]: time="2026-01-20T13:59:37.156171481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 13:59:37.272000 audit[5902]: USER_ACCT pid=5902 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:37.290923 sshd[5902]: Accepted publickey for core from 10.200.16.10 port 56684 ssh2: RSA SHA256:8oRp7fHfsMbr5tiGU4DdFYKNuE3Q7/G5fCohEz31AuI Jan 20 13:59:37.290000 audit[5902]: CRED_ACQ pid=5902 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:37.292106 sshd-session[5902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 13:59:37.305113 kernel: audit: type=1101 audit(1768917577.272:826): pid=5902 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:37.305186 kernel: audit: type=1103 audit(1768917577.290:827): pid=5902 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:37.310590 systemd-logind[2119]: New session 16 of user core. Jan 20 13:59:37.319446 kernel: audit: type=1006 audit(1768917577.290:828): pid=5902 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 20 13:59:37.290000 audit[5902]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdc6dbee0 a2=3 a3=0 items=0 ppid=1 pid=5902 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:59:37.337256 kernel: audit: type=1300 audit(1768917577.290:828): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdc6dbee0 a2=3 a3=0 items=0 ppid=1 pid=5902 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:59:37.338546 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 20 13:59:37.290000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 13:59:37.346089 kernel: audit: type=1327 audit(1768917577.290:828): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 13:59:37.347000 audit[5902]: USER_START pid=5902 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:37.368152 kernel: audit: type=1105 audit(1768917577.347:829): pid=5902 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:37.368000 audit[5906]: CRED_ACQ pid=5906 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:37.384807 kernel: audit: type=1103 audit(1768917577.368:830): pid=5906 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:37.447604 containerd[2143]: time="2026-01-20T13:59:37.447019819Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 13:59:37.450437 containerd[2143]: time="2026-01-20T13:59:37.449973059Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 13:59:37.450437 containerd[2143]: time="2026-01-20T13:59:37.450057758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 13:59:37.450812 kubelet[3709]: E0120 13:59:37.450723 3709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 13:59:37.451304 kubelet[3709]: E0120 13:59:37.451279 3709 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 13:59:37.451506 kubelet[3709]: E0120 13:59:37.451473 3709 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q6tsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77799d95d6-2jfd5_calico-system(f1bb23d8-4b1f-4aa2-8b63-89adfd023536): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 13:59:37.455298 kubelet[3709]: E0120 13:59:37.453160 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77799d95d6-2jfd5" podUID="f1bb23d8-4b1f-4aa2-8b63-89adfd023536" Jan 20 13:59:37.573903 sshd[5906]: Connection closed by 10.200.16.10 port 56684 Jan 20 13:59:37.574508 sshd-session[5902]: pam_unix(sshd:session): session closed for user core Jan 20 13:59:37.574000 audit[5902]: USER_END pid=5902 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:37.579563 systemd[1]: sshd@12-10.200.20.32:22-10.200.16.10:56684.service: Deactivated 
successfully. Jan 20 13:59:37.581666 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 13:59:37.575000 audit[5902]: CRED_DISP pid=5902 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:37.610946 kernel: audit: type=1106 audit(1768917577.574:831): pid=5902 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:37.611021 kernel: audit: type=1104 audit(1768917577.575:832): pid=5902 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:37.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.32:22-10.200.16.10:56684 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:59:37.612612 systemd-logind[2119]: Session 16 logged out. Waiting for processes to exit. Jan 20 13:59:37.613557 systemd-logind[2119]: Removed session 16. Jan 20 13:59:39.834087 kubelet[3709]: E0120 13:59:39.834037 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68d86775db-xr92p" podUID="d927ba29-b3e9-4c16-90f8-d925c765f0ce" Jan 20 13:59:39.834495 kubelet[3709]: E0120 13:59:39.834336 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9ztjh" podUID="85bd9e66-9697-4ce6-b203-5bf288af5ea8" Jan 20 13:59:42.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.32:22-10.200.16.10:42406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:59:42.664035 systemd[1]: Started sshd@13-10.200.20.32:22-10.200.16.10:42406.service - OpenSSH per-connection server daemon (10.200.16.10:42406). Jan 20 13:59:42.667562 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 13:59:42.667900 kernel: audit: type=1130 audit(1768917582.662:834): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.32:22-10.200.16.10:42406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 13:59:42.832181 kubelet[3709]: E0120 13:59:42.832135 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dc74856cf-zx59p" podUID="9e279828-5731-43f9-9e3d-019d423f62e9" Jan 20 13:59:43.099988 sshd[5925]: Accepted publickey for core from 10.200.16.10 port 42406 ssh2: RSA SHA256:8oRp7fHfsMbr5tiGU4DdFYKNuE3Q7/G5fCohEz31AuI Jan 20 13:59:43.098000 audit[5925]: USER_ACCT pid=5925 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:43.120233 sshd-session[5925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 13:59:43.118000 audit[5925]: CRED_ACQ pid=5925 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:43.142493 kernel: audit: type=1101 audit(1768917583.098:835): pid=5925 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:43.142569 kernel: audit: type=1103 audit(1768917583.118:836): pid=5925 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:43.155466 kernel: audit: type=1006 audit(1768917583.118:837): pid=5925 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 20 13:59:43.147765 systemd-logind[2119]: New session 17 of user core. Jan 20 13:59:43.157574 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 20 13:59:43.118000 audit[5925]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff0a0ca80 a2=3 a3=0 items=0 ppid=1 pid=5925 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:59:43.176901 kernel: audit: type=1300 audit(1768917583.118:837): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff0a0ca80 a2=3 a3=0 items=0 ppid=1 pid=5925 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:59:43.118000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 13:59:43.185210 kernel: audit: type=1327 audit(1768917583.118:837): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 13:59:43.183000 audit[5925]: USER_START pid=5925 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:43.206799 kernel: audit: type=1105 audit(1768917583.183:838): pid=5925 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:43.185000 audit[5929]: CRED_ACQ pid=5929 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:43.221671 kernel: audit: type=1103 audit(1768917583.185:839): pid=5929 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:43.400446 sshd[5929]: Connection closed by 10.200.16.10 port 42406 Jan 20 13:59:43.401171 sshd-session[5925]: pam_unix(sshd:session): session closed for user core Jan 20 13:59:43.400000 audit[5925]: USER_END pid=5925 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:43.404706 systemd-logind[2119]: Session 17 logged out. Waiting for processes to exit. Jan 20 13:59:43.406458 systemd[1]: sshd@13-10.200.20.32:22-10.200.16.10:42406.service: Deactivated successfully. Jan 20 13:59:43.408661 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 13:59:43.411057 systemd-logind[2119]: Removed session 17. 
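The audit PROCTITLE records in the session open/close sequences above carry the process title as a hex string, because audit hex-escapes any value it cannot print verbatim (embedded spaces or NUL bytes). A minimal Python sketch, using only the hex value copied from the records above, shows what the session helper process actually is:

    # Decode the hex-encoded proctitle field from the audit PROCTITLE records above.
    PROCTITLE_HEX = "737368642D73657373696F6E3A20636F7265205B707269765D"

    title = bytes.fromhex(PROCTITLE_HEX).decode("ascii")
    print(title)  # -> sshd-session: core [priv]

The same decoding applies to every proctitle=... field elsewhere in this log.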
Jan 20 13:59:43.401000 audit[5925]: CRED_DISP pid=5925 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:43.440325 kernel: audit: type=1106 audit(1768917583.400:840): pid=5925 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:43.440409 kernel: audit: type=1104 audit(1768917583.401:841): pid=5925 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:43.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.32:22-10.200.16.10:42406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:59:47.834856 containerd[2143]: time="2026-01-20T13:59:47.834612833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 13:59:48.068750 containerd[2143]: time="2026-01-20T13:59:48.068551963Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 13:59:48.071589 containerd[2143]: time="2026-01-20T13:59:48.071552261Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 13:59:48.072205 containerd[2143]: time="2026-01-20T13:59:48.071639048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 13:59:48.072462 kubelet[3709]: E0120 13:59:48.072420 3709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 13:59:48.073166 kubelet[3709]: E0120 13:59:48.072472 3709 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 13:59:48.073166 kubelet[3709]: E0120 13:59:48.072585 3709 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c9rlh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7dc74856cf-7w8mb_calico-apiserver(5c39ce3f-4fc4-445a-adab-faa3675de94e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 13:59:48.073946 kubelet[3709]: E0120 13:59:48.073855 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dc74856cf-7w8mb" podUID="5c39ce3f-4fc4-445a-adab-faa3675de94e" Jan 20 13:59:48.498213 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 13:59:48.498369 kernel: audit: type=1130 audit(1768917588.490:843): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.32:22-10.200.16.10:42420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:59:48.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.32:22-10.200.16.10:42420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:59:48.490626 systemd[1]: Started sshd@14-10.200.20.32:22-10.200.16.10:42420.service - OpenSSH per-connection server daemon (10.200.16.10:42420). 
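Each failed pull above ends with containerd reporting a 404 from ghcr.io and "active requests=0, bytes read=0", i.e. the registry rejects the manifest request before any layer data is transferred, which is consistent with the v3.30.4 tag simply not existing in these repositories (registries can also answer 404 for repositories the caller is not allowed to see). A minimal sketch of reproducing that check outside the kubelet, assuming anonymous pull access and only the standard OCI distribution endpoints; the repository and tag are taken from the PullImage errors above:

    import json
    import urllib.error
    import urllib.request

    REPO = "flatcar/calico/apiserver"   # repository from the PullImage errors above
    TAG = "v3.30.4"

    # 1. Fetch an anonymous pull token for the repository.
    token_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{REPO}:pull"
    token = json.load(urllib.request.urlopen(token_url))["token"]

    # 2. Request the manifest for the tag; a 404 here matches containerd's "not found".
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{REPO}/manifests/{TAG}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json, "
                      "application/vnd.docker.distribution.manifest.v2+json",
        },
    )
    try:
        with urllib.request.urlopen(req) as resp:
            print("manifest found, HTTP", resp.status)
    except urllib.error.HTTPError as err:
        print("manifest missing, HTTP", err.code)  # expect 404, as logged above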
Jan 20 13:59:48.925000 audit[5951]: USER_ACCT pid=5951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:48.926489 sshd[5951]: Accepted publickey for core from 10.200.16.10 port 42420 ssh2: RSA SHA256:8oRp7fHfsMbr5tiGU4DdFYKNuE3Q7/G5fCohEz31AuI Jan 20 13:59:48.943925 sshd-session[5951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 13:59:48.942000 audit[5951]: CRED_ACQ pid=5951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:48.958566 kernel: audit: type=1101 audit(1768917588.925:844): pid=5951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:48.958654 kernel: audit: type=1103 audit(1768917588.942:845): pid=5951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:48.968566 kernel: audit: type=1006 audit(1768917588.942:846): pid=5951 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 20 13:59:48.942000 audit[5951]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff1c9df10 a2=3 a3=0 items=0 ppid=1 pid=5951 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:59:48.985364 kernel: audit: type=1300 audit(1768917588.942:846): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff1c9df10 a2=3 a3=0 items=0 ppid=1 pid=5951 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:59:48.942000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 13:59:48.993002 kernel: audit: type=1327 audit(1768917588.942:846): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 13:59:48.997169 systemd-logind[2119]: New session 18 of user core. Jan 20 13:59:49.001558 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 20 13:59:49.024000 audit[5951]: USER_START pid=5951 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:49.046246 kernel: audit: type=1105 audit(1768917589.024:847): pid=5951 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:49.046350 kernel: audit: type=1103 audit(1768917589.045:848): pid=5955 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:49.045000 audit[5955]: CRED_ACQ pid=5955 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:49.250499 sshd[5955]: Connection closed by 10.200.16.10 port 42420 Jan 20 13:59:49.251456 sshd-session[5951]: pam_unix(sshd:session): session closed for user core Jan 20 13:59:49.252000 audit[5951]: USER_END pid=5951 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:49.274586 systemd[1]: sshd@14-10.200.20.32:22-10.200.16.10:42420.service: Deactivated successfully. Jan 20 13:59:49.252000 audit[5951]: CRED_DISP pid=5951 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:49.276532 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 13:59:49.277363 systemd-logind[2119]: Session 18 logged out. Waiting for processes to exit. Jan 20 13:59:49.280012 systemd-logind[2119]: Removed session 18. Jan 20 13:59:49.289573 kernel: audit: type=1106 audit(1768917589.252:849): pid=5951 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:49.289616 kernel: audit: type=1104 audit(1768917589.252:850): pid=5951 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:49.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.32:22-10.200.16.10:42420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 13:59:49.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.32:22-10.200.16.10:42424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:59:49.332621 systemd[1]: Started sshd@15-10.200.20.32:22-10.200.16.10:42424.service - OpenSSH per-connection server daemon (10.200.16.10:42424). Jan 20 13:59:49.728000 audit[5967]: USER_ACCT pid=5967 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:49.729851 sshd[5967]: Accepted publickey for core from 10.200.16.10 port 42424 ssh2: RSA SHA256:8oRp7fHfsMbr5tiGU4DdFYKNuE3Q7/G5fCohEz31AuI Jan 20 13:59:49.730000 audit[5967]: CRED_ACQ pid=5967 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:49.730000 audit[5967]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdcd9d690 a2=3 a3=0 items=0 ppid=1 pid=5967 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:59:49.730000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 13:59:49.731475 sshd-session[5967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 13:59:49.739264 systemd-logind[2119]: New session 19 of user core. Jan 20 13:59:49.745533 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 20 13:59:49.747000 audit[5967]: USER_START pid=5967 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:49.749000 audit[5971]: CRED_ACQ pid=5971 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:50.116775 sshd[5971]: Connection closed by 10.200.16.10 port 42424 Jan 20 13:59:50.116150 sshd-session[5967]: pam_unix(sshd:session): session closed for user core Jan 20 13:59:50.118000 audit[5967]: USER_END pid=5967 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:50.118000 audit[5967]: CRED_DISP pid=5967 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:50.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.32:22-10.200.16.10:42424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:59:50.121223 systemd[1]: sshd@15-10.200.20.32:22-10.200.16.10:42424.service: Deactivated successfully. Jan 20 13:59:50.123317 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 13:59:50.124949 systemd-logind[2119]: Session 19 logged out. Waiting for processes to exit. Jan 20 13:59:50.126622 systemd-logind[2119]: Removed session 19. Jan 20 13:59:50.206638 systemd[1]: Started sshd@16-10.200.20.32:22-10.200.16.10:59466.service - OpenSSH per-connection server daemon (10.200.16.10:59466). Jan 20 13:59:50.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.32:22-10.200.16.10:59466 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 13:59:50.624000 audit[5981]: USER_ACCT pid=5981 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:50.625352 sshd[5981]: Accepted publickey for core from 10.200.16.10 port 59466 ssh2: RSA SHA256:8oRp7fHfsMbr5tiGU4DdFYKNuE3Q7/G5fCohEz31AuI Jan 20 13:59:50.625000 audit[5981]: CRED_ACQ pid=5981 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:50.625000 audit[5981]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc2c205d0 a2=3 a3=0 items=0 ppid=1 pid=5981 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:59:50.625000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 13:59:50.626974 sshd-session[5981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 13:59:50.631167 systemd-logind[2119]: New session 20 of user core. Jan 20 13:59:50.636668 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 20 13:59:50.638000 audit[5981]: USER_START pid=5981 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:50.640000 audit[5985]: CRED_ACQ pid=5985 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:50.834311 containerd[2143]: time="2026-01-20T13:59:50.834067503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 13:59:50.835759 kubelet[3709]: E0120 13:59:50.835296 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77799d95d6-2jfd5" podUID="f1bb23d8-4b1f-4aa2-8b63-89adfd023536" Jan 20 13:59:51.220542 containerd[2143]: time="2026-01-20T13:59:51.220495963Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 13:59:51.223941 containerd[2143]: time="2026-01-20T13:59:51.223894490Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 13:59:51.224152 containerd[2143]: time="2026-01-20T13:59:51.223971308Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 13:59:51.224272 kubelet[3709]: E0120 13:59:51.224199 3709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 13:59:51.224272 kubelet[3709]: E0120 13:59:51.224250 3709 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 13:59:51.224476 kubelet[3709]: E0120 13:59:51.224438 3709 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-szm87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vhqxx_calico-system(23535579-9237-4da3-a34f-0ccbbd6a2ee0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 13:59:51.224993 containerd[2143]: time="2026-01-20T13:59:51.224714291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 13:59:51.348000 audit[6000]: NETFILTER_CFG table=filter:147 family=2 entries=26 op=nft_register_rule pid=6000 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:59:51.348000 audit[6000]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=fffff2903f70 a2=0 a3=1 items=0 ppid=3859 pid=6000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:59:51.348000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:59:51.358000 audit[6000]: NETFILTER_CFG table=nat:148 family=2 entries=20 op=nft_register_rule pid=6000 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:59:51.358000 audit[6000]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff2903f70 a2=0 a3=1 items=0 ppid=3859 pid=6000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:59:51.358000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:59:51.379000 audit[6002]: NETFILTER_CFG table=filter:149 family=2 entries=38 op=nft_register_rule pid=6002 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:59:51.379000 audit[6002]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffd9c3d450 a2=0 a3=1 items=0 ppid=3859 pid=6002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:59:51.379000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:59:51.389000 audit[6002]: NETFILTER_CFG table=nat:150 family=2 entries=20 op=nft_register_rule pid=6002 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:59:51.389000 audit[6002]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd9c3d450 a2=0 a3=1 items=0 ppid=3859 pid=6002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:59:51.389000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:59:51.450056 sshd[5985]: Connection closed by 10.200.16.10 port 59466 Jan 20 13:59:51.452583 sshd-session[5981]: pam_unix(sshd:session): session closed for user core Jan 20 13:59:51.455000 audit[5981]: USER_END pid=5981 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:51.455000 audit[5981]: CRED_DISP pid=5981 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:51.460005 systemd[1]: sshd@16-10.200.20.32:22-10.200.16.10:59466.service: Deactivated successfully. 
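The NETFILTER_CFG/SYSCALL pairs above come from the nft backend of iptables (exe=/usr/sbin/xtables-nft-multi, comm truncated to "iptables-restor") pushing rule updates into the kernel; on arm64 (arch=c00000b7) syscall 211 is sendmsg, the netlink call that carries the rule batch. Their proctitle is hex-encoded as well, with NUL bytes separating the argv entries; a short decoding sketch, hex copied from the records above:

    # Decode the NUL-separated argv of the iptables-restore invocation logged above.
    HEX = ("69707461626C65732D726573746F7265002D770035002D5700"
           "313030303030002D2D6E6F666C757368002D2D636F756E74657273")

    argv = bytes.fromhex(HEX).decode("ascii").split("\x00")
    print(argv)  # ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']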
Jan 20 13:59:51.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.32:22-10.200.16.10:59466 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:59:51.465350 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 13:59:51.468113 systemd-logind[2119]: Session 20 logged out. Waiting for processes to exit. Jan 20 13:59:51.471453 systemd-logind[2119]: Removed session 20. Jan 20 13:59:51.478714 containerd[2143]: time="2026-01-20T13:59:51.478673991Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 13:59:51.481783 containerd[2143]: time="2026-01-20T13:59:51.481747068Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 13:59:51.482439 containerd[2143]: time="2026-01-20T13:59:51.481808854Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 13:59:51.482672 kubelet[3709]: E0120 13:59:51.482636 3709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 13:59:51.482782 kubelet[3709]: E0120 13:59:51.482766 3709 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 13:59:51.483081 kubelet[3709]: E0120 13:59:51.483032 3709 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ftvn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9ztjh_calico-system(85bd9e66-9697-4ce6-b203-5bf288af5ea8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 13:59:51.484153 containerd[2143]: time="2026-01-20T13:59:51.483238289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 13:59:51.484310 kubelet[3709]: E0120 13:59:51.484289 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9ztjh" podUID="85bd9e66-9697-4ce6-b203-5bf288af5ea8" Jan 20 13:59:51.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.32:22-10.200.16.10:59470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:59:51.539637 systemd[1]: Started sshd@17-10.200.20.32:22-10.200.16.10:59470.service - OpenSSH per-connection server daemon (10.200.16.10:59470). 
Jan 20 13:59:51.800027 containerd[2143]: time="2026-01-20T13:59:51.799867086Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 13:59:51.804235 containerd[2143]: time="2026-01-20T13:59:51.804156343Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 13:59:51.804493 containerd[2143]: time="2026-01-20T13:59:51.804172711Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 13:59:51.804540 kubelet[3709]: E0120 13:59:51.804507 3709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 13:59:51.804652 kubelet[3709]: E0120 13:59:51.804559 3709 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 13:59:51.805352 kubelet[3709]: E0120 13:59:51.805299 3709 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-szm87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-vhqxx_calico-system(23535579-9237-4da3-a34f-0ccbbd6a2ee0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 13:59:51.806785 kubelet[3709]: E0120 13:59:51.806649 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vhqxx" podUID="23535579-9237-4da3-a34f-0ccbbd6a2ee0" Jan 20 13:59:51.932000 audit[6007]: USER_ACCT pid=6007 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:51.932695 sshd[6007]: Accepted publickey for core from 10.200.16.10 port 59470 ssh2: RSA SHA256:8oRp7fHfsMbr5tiGU4DdFYKNuE3Q7/G5fCohEz31AuI Jan 20 13:59:51.933000 audit[6007]: CRED_ACQ pid=6007 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:51.933000 audit[6007]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdbf01640 a2=3 a3=0 items=0 ppid=1 pid=6007 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:59:51.933000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 13:59:51.934121 sshd-session[6007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 13:59:51.938436 systemd-logind[2119]: New session 21 of user core. Jan 20 13:59:51.942597 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 20 13:59:51.945000 audit[6007]: USER_START pid=6007 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:51.947000 audit[6011]: CRED_ACQ pid=6011 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:52.300087 sshd[6011]: Connection closed by 10.200.16.10 port 59470 Jan 20 13:59:52.300889 sshd-session[6007]: pam_unix(sshd:session): session closed for user core Jan 20 13:59:52.301000 audit[6007]: USER_END pid=6007 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:52.301000 audit[6007]: CRED_DISP pid=6007 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:52.304022 systemd[1]: sshd@17-10.200.20.32:22-10.200.16.10:59470.service: Deactivated successfully. Jan 20 13:59:52.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.32:22-10.200.16.10:59470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:59:52.306035 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 13:59:52.307752 systemd-logind[2119]: Session 21 logged out. Waiting for processes to exit. Jan 20 13:59:52.309313 systemd-logind[2119]: Removed session 21. Jan 20 13:59:52.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.32:22-10.200.16.10:59486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:59:52.383508 systemd[1]: Started sshd@18-10.200.20.32:22-10.200.16.10:59486.service - OpenSSH per-connection server daemon (10.200.16.10:59486). 
Jan 20 13:59:52.802768 sshd[6044]: Accepted publickey for core from 10.200.16.10 port 59486 ssh2: RSA SHA256:8oRp7fHfsMbr5tiGU4DdFYKNuE3Q7/G5fCohEz31AuI Jan 20 13:59:52.802000 audit[6044]: USER_ACCT pid=6044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:52.803000 audit[6044]: CRED_ACQ pid=6044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:52.803000 audit[6044]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe89c8050 a2=3 a3=0 items=0 ppid=1 pid=6044 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:59:52.803000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 13:59:52.804932 sshd-session[6044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 13:59:52.810611 systemd-logind[2119]: New session 22 of user core. Jan 20 13:59:52.813995 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 20 13:59:52.816000 audit[6044]: USER_START pid=6044 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:52.818000 audit[6049]: CRED_ACQ pid=6049 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:52.832781 containerd[2143]: time="2026-01-20T13:59:52.832691040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 13:59:53.084073 sshd[6049]: Connection closed by 10.200.16.10 port 59486 Jan 20 13:59:53.084875 sshd-session[6044]: pam_unix(sshd:session): session closed for user core Jan 20 13:59:53.086000 audit[6044]: USER_END pid=6044 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:53.086000 audit[6044]: CRED_DISP pid=6044 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:53.090185 systemd[1]: sshd@18-10.200.20.32:22-10.200.16.10:59486.service: Deactivated successfully. Jan 20 13:59:53.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.32:22-10.200.16.10:59486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 13:59:53.093633 systemd[1]: session-22.scope: Deactivated successfully. Jan 20 13:59:53.095461 systemd-logind[2119]: Session 22 logged out. Waiting for processes to exit. Jan 20 13:59:53.097496 systemd-logind[2119]: Removed session 22. Jan 20 13:59:53.129068 containerd[2143]: time="2026-01-20T13:59:53.129018465Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 13:59:53.132991 containerd[2143]: time="2026-01-20T13:59:53.132942832Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 13:59:53.133079 containerd[2143]: time="2026-01-20T13:59:53.133030474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 13:59:53.133212 kubelet[3709]: E0120 13:59:53.133176 3709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 13:59:53.133768 kubelet[3709]: E0120 13:59:53.133222 3709 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 13:59:53.133768 kubelet[3709]: E0120 13:59:53.133329 3709 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wj6nr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68d86775db-xr92p_calico-system(d927ba29-b3e9-4c16-90f8-d925c765f0ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 13:59:53.134536 kubelet[3709]: E0120 13:59:53.134496 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68d86775db-xr92p" podUID="d927ba29-b3e9-4c16-90f8-d925c765f0ce" Jan 20 13:59:56.260460 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 20 13:59:56.260611 kernel: audit: type=1325 audit(1768917596.252:892): table=filter:151 family=2 entries=26 op=nft_register_rule pid=6074 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:59:56.252000 audit[6074]: NETFILTER_CFG table=filter:151 family=2 entries=26 op=nft_register_rule pid=6074 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:59:56.252000 audit[6074]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc9601810 a2=0 a3=1 items=0 ppid=3859 pid=6074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:59:56.282326 kernel: audit: type=1300 audit(1768917596.252:892): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc9601810 a2=0 a3=1 items=0 ppid=3859 pid=6074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:59:56.252000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:59:56.291684 kernel: audit: type=1327 audit(1768917596.252:892): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:59:56.285000 audit[6074]: NETFILTER_CFG table=nat:152 family=2 entries=104 op=nft_register_chain pid=6074 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:59:56.301349 kernel: audit: type=1325 audit(1768917596.285:893): table=nat:152 family=2 entries=104 op=nft_register_chain pid=6074 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 13:59:56.285000 audit[6074]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffc9601810 a2=0 a3=1 items=0 ppid=3859 pid=6074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:59:56.319382 kernel: audit: type=1300 audit(1768917596.285:893): arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffc9601810 a2=0 a3=1 items=0 ppid=3859 pid=6074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:59:56.285000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:59:56.331331 kernel: audit: type=1327 audit(1768917596.285:893): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 13:59:56.833368 containerd[2143]: time="2026-01-20T13:59:56.833109117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 13:59:57.114175 containerd[2143]: time="2026-01-20T13:59:57.114098815Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 13:59:57.117325 containerd[2143]: time="2026-01-20T13:59:57.117223109Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 13:59:57.117325 containerd[2143]: time="2026-01-20T13:59:57.117278758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 13:59:57.117515 kubelet[3709]: E0120 13:59:57.117470 3709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 13:59:57.117812 kubelet[3709]: E0120 13:59:57.117521 3709 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 13:59:57.117812 kubelet[3709]: E0120 13:59:57.117658 3709 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7rhtb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7dc74856cf-zx59p_calico-apiserver(9e279828-5731-43f9-9e3d-019d423f62e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 13:59:57.119194 kubelet[3709]: E0120 13:59:57.119037 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dc74856cf-zx59p" podUID="9e279828-5731-43f9-9e3d-019d423f62e9" Jan 20 13:59:58.181540 systemd[1]: Started sshd@19-10.200.20.32:22-10.200.16.10:59492.service - OpenSSH per-connection server daemon (10.200.16.10:59492). Jan 20 13:59:58.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.32:22-10.200.16.10:59492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:59:58.197405 kernel: audit: type=1130 audit(1768917598.181:894): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.32:22-10.200.16.10:59492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 13:59:58.609000 audit[6076]: USER_ACCT pid=6076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:58.618652 sshd[6076]: Accepted publickey for core from 10.200.16.10 port 59492 ssh2: RSA SHA256:8oRp7fHfsMbr5tiGU4DdFYKNuE3Q7/G5fCohEz31AuI Jan 20 13:59:58.625000 audit[6076]: CRED_ACQ pid=6076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:58.627782 sshd-session[6076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 13:59:58.642010 kernel: audit: type=1101 audit(1768917598.609:895): pid=6076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:58.642091 kernel: audit: type=1103 audit(1768917598.625:896): pid=6076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:58.647200 systemd-logind[2119]: New session 23 of user core. Jan 20 13:59:58.652324 kernel: audit: type=1006 audit(1768917598.625:897): pid=6076 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 20 13:59:58.625000 audit[6076]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffccfe69d0 a2=3 a3=0 items=0 ppid=1 pid=6076 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 13:59:58.625000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 13:59:58.655567 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 20 13:59:58.658000 audit[6076]: USER_START pid=6076 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:58.659000 audit[6080]: CRED_ACQ pid=6080 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:58.899358 sshd[6080]: Connection closed by 10.200.16.10 port 59492 Jan 20 13:59:58.899867 sshd-session[6076]: pam_unix(sshd:session): session closed for user core Jan 20 13:59:58.899000 audit[6076]: USER_END pid=6076 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:58.899000 audit[6076]: CRED_DISP pid=6076 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 13:59:58.902626 systemd[1]: sshd@19-10.200.20.32:22-10.200.16.10:59492.service: Deactivated successfully. Jan 20 13:59:58.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.32:22-10.200.16.10:59492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 13:59:58.904932 systemd[1]: session-23.scope: Deactivated successfully. Jan 20 13:59:58.908086 systemd-logind[2119]: Session 23 logged out. Waiting for processes to exit. Jan 20 13:59:58.909431 systemd-logind[2119]: Removed session 23. Jan 20 14:00:01.833802 kubelet[3709]: E0120 14:00:01.833531 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dc74856cf-7w8mb" podUID="5c39ce3f-4fc4-445a-adab-faa3675de94e" Jan 20 14:00:03.845243 kubelet[3709]: E0120 14:00:03.845191 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9ztjh" podUID="85bd9e66-9697-4ce6-b203-5bf288af5ea8" Jan 20 14:00:03.988623 systemd[1]: Started sshd@20-10.200.20.32:22-10.200.16.10:33866.service - OpenSSH per-connection server daemon (10.200.16.10:33866). 
Jan 20 14:00:03.996410 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 20 14:00:03.996490 kernel: audit: type=1130 audit(1768917603.987:903): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.32:22-10.200.16.10:33866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:00:03.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.32:22-10.200.16.10:33866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:00:04.423000 audit[6092]: USER_ACCT pid=6092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:04.442273 sshd[6092]: Accepted publickey for core from 10.200.16.10 port 33866 ssh2: RSA SHA256:8oRp7fHfsMbr5tiGU4DdFYKNuE3Q7/G5fCohEz31AuI Jan 20 14:00:04.443227 sshd-session[6092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 14:00:04.441000 audit[6092]: CRED_ACQ pid=6092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:04.459619 kernel: audit: type=1101 audit(1768917604.423:904): pid=6092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:04.459689 kernel: audit: type=1103 audit(1768917604.441:905): pid=6092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:04.470025 kernel: audit: type=1006 audit(1768917604.441:906): pid=6092 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 20 14:00:04.441000 audit[6092]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc91b3a80 a2=3 a3=0 items=0 ppid=1 pid=6092 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 14:00:04.472715 systemd-logind[2119]: New session 24 of user core. Jan 20 14:00:04.488300 kernel: audit: type=1300 audit(1768917604.441:906): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc91b3a80 a2=3 a3=0 items=0 ppid=1 pid=6092 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 14:00:04.496104 kernel: audit: type=1327 audit(1768917604.441:906): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 14:00:04.441000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 14:00:04.496718 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 20 14:00:04.499000 audit[6092]: USER_START pid=6092 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:04.520932 kernel: audit: type=1105 audit(1768917604.499:907): pid=6092 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:04.521000 audit[6096]: CRED_ACQ pid=6096 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:04.537926 kernel: audit: type=1103 audit(1768917604.521:908): pid=6096 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:04.726375 sshd[6096]: Connection closed by 10.200.16.10 port 33866 Jan 20 14:00:04.727462 sshd-session[6092]: pam_unix(sshd:session): session closed for user core Jan 20 14:00:04.727000 audit[6092]: USER_END pid=6092 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:04.749649 systemd[1]: sshd@20-10.200.20.32:22-10.200.16.10:33866.service: Deactivated successfully. Jan 20 14:00:04.751243 systemd[1]: session-24.scope: Deactivated successfully. Jan 20 14:00:04.759182 kernel: audit: type=1106 audit(1768917604.727:909): pid=6092 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:04.759246 kernel: audit: type=1104 audit(1768917604.727:910): pid=6092 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:04.727000 audit[6092]: CRED_DISP pid=6092 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:04.756760 systemd-logind[2119]: Session 24 logged out. Waiting for processes to exit. Jan 20 14:00:04.758368 systemd-logind[2119]: Removed session 24. Jan 20 14:00:04.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.32:22-10.200.16.10:33866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 14:00:04.833275 kubelet[3709]: E0120 14:00:04.832746 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77799d95d6-2jfd5" podUID="f1bb23d8-4b1f-4aa2-8b63-89adfd023536" Jan 20 14:00:06.834890 kubelet[3709]: E0120 14:00:06.834650 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vhqxx" podUID="23535579-9237-4da3-a34f-0ccbbd6a2ee0" Jan 20 14:00:07.832142 kubelet[3709]: E0120 14:00:07.832022 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68d86775db-xr92p" podUID="d927ba29-b3e9-4c16-90f8-d925c765f0ce" Jan 20 14:00:09.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.32:22-10.200.16.10:57998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:00:09.816625 systemd[1]: Started sshd@21-10.200.20.32:22-10.200.16.10:57998.service - OpenSSH per-connection server daemon (10.200.16.10:57998). Jan 20 14:00:09.819650 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 14:00:09.819748 kernel: audit: type=1130 audit(1768917609.815:912): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.32:22-10.200.16.10:57998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 14:00:10.252754 sshd[6107]: Accepted publickey for core from 10.200.16.10 port 57998 ssh2: RSA SHA256:8oRp7fHfsMbr5tiGU4DdFYKNuE3Q7/G5fCohEz31AuI Jan 20 14:00:10.251000 audit[6107]: USER_ACCT pid=6107 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:10.254166 sshd-session[6107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 14:00:10.252000 audit[6107]: CRED_ACQ pid=6107 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:10.278062 systemd-logind[2119]: New session 25 of user core. Jan 20 14:00:10.289415 kernel: audit: type=1101 audit(1768917610.251:913): pid=6107 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:10.289495 kernel: audit: type=1103 audit(1768917610.252:914): pid=6107 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:10.294731 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 20 14:00:10.304855 kernel: audit: type=1006 audit(1768917610.252:915): pid=6107 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 20 14:00:10.252000 audit[6107]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffeaa16900 a2=3 a3=0 items=0 ppid=1 pid=6107 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 14:00:10.328425 kernel: audit: type=1300 audit(1768917610.252:915): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffeaa16900 a2=3 a3=0 items=0 ppid=1 pid=6107 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 14:00:10.252000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 14:00:10.335905 kernel: audit: type=1327 audit(1768917610.252:915): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 14:00:10.305000 audit[6107]: USER_START pid=6107 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:10.354749 kernel: audit: type=1105 audit(1768917610.305:916): pid=6107 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:10.308000 audit[6111]: CRED_ACQ pid=6111 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:10.369667 kernel: audit: type=1103 audit(1768917610.308:917): pid=6111 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:10.534871 sshd[6111]: Connection closed by 10.200.16.10 port 57998 Jan 20 14:00:10.535569 sshd-session[6107]: pam_unix(sshd:session): session closed for user core Jan 20 14:00:10.535000 audit[6107]: USER_END pid=6107 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:10.540604 systemd-logind[2119]: Session 25 logged out. Waiting for processes to exit. Jan 20 14:00:10.543973 systemd[1]: sshd@21-10.200.20.32:22-10.200.16.10:57998.service: Deactivated successfully. Jan 20 14:00:10.550205 systemd[1]: session-25.scope: Deactivated successfully. Jan 20 14:00:10.556182 systemd-logind[2119]: Removed session 25. Jan 20 14:00:10.535000 audit[6107]: CRED_DISP pid=6107 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:10.574568 kernel: audit: type=1106 audit(1768917610.535:918): pid=6107 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:10.574666 kernel: audit: type=1104 audit(1768917610.535:919): pid=6107 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:10.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.32:22-10.200.16.10:57998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 14:00:10.832399 kubelet[3709]: E0120 14:00:10.831919 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dc74856cf-zx59p" podUID="9e279828-5731-43f9-9e3d-019d423f62e9" Jan 20 14:00:12.832373 kubelet[3709]: E0120 14:00:12.832089 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dc74856cf-7w8mb" podUID="5c39ce3f-4fc4-445a-adab-faa3675de94e" Jan 20 14:00:15.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.32:22-10.200.16.10:58008 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:00:15.623251 systemd[1]: Started sshd@22-10.200.20.32:22-10.200.16.10:58008.service - OpenSSH per-connection server daemon (10.200.16.10:58008). Jan 20 14:00:15.645140 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 14:00:15.645229 kernel: audit: type=1130 audit(1768917615.621:921): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.32:22-10.200.16.10:58008 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 14:00:16.089000 audit[6126]: USER_ACCT pid=6126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:16.090842 sshd[6126]: Accepted publickey for core from 10.200.16.10 port 58008 ssh2: RSA SHA256:8oRp7fHfsMbr5tiGU4DdFYKNuE3Q7/G5fCohEz31AuI Jan 20 14:00:16.106000 audit[6126]: CRED_ACQ pid=6126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:16.109856 sshd-session[6126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 14:00:16.122501 kernel: audit: type=1101 audit(1768917616.089:922): pid=6126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:16.122579 kernel: audit: type=1103 audit(1768917616.106:923): pid=6126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:16.129249 systemd-logind[2119]: New session 26 of user core. Jan 20 14:00:16.136681 kernel: audit: type=1006 audit(1768917616.106:924): pid=6126 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 20 14:00:16.106000 audit[6126]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe6a8e160 a2=3 a3=0 items=0 ppid=1 pid=6126 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 14:00:16.153219 kernel: audit: type=1300 audit(1768917616.106:924): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe6a8e160 a2=3 a3=0 items=0 ppid=1 pid=6126 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 14:00:16.106000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 14:00:16.155569 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 20 14:00:16.159889 kernel: audit: type=1327 audit(1768917616.106:924): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 14:00:16.160000 audit[6126]: USER_START pid=6126 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:16.180000 audit[6130]: CRED_ACQ pid=6130 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:16.195003 kernel: audit: type=1105 audit(1768917616.160:925): pid=6126 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:16.195121 kernel: audit: type=1103 audit(1768917616.180:926): pid=6130 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:16.384408 sshd[6130]: Connection closed by 10.200.16.10 port 58008 Jan 20 14:00:16.384894 sshd-session[6126]: pam_unix(sshd:session): session closed for user core Jan 20 14:00:16.386000 audit[6126]: USER_END pid=6126 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:16.389905 systemd-logind[2119]: Session 26 logged out. Waiting for processes to exit. Jan 20 14:00:16.392281 systemd[1]: sshd@22-10.200.20.32:22-10.200.16.10:58008.service: Deactivated successfully. Jan 20 14:00:16.395340 systemd[1]: session-26.scope: Deactivated successfully. Jan 20 14:00:16.397970 systemd-logind[2119]: Removed session 26. 
Jan 20 14:00:16.386000 audit[6126]: CRED_DISP pid=6126 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:16.422364 kernel: audit: type=1106 audit(1768917616.386:927): pid=6126 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:16.422445 kernel: audit: type=1104 audit(1768917616.386:928): pid=6126 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:16.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.32:22-10.200.16.10:58008 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:00:16.832528 kubelet[3709]: E0120 14:00:16.832304 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9ztjh" podUID="85bd9e66-9697-4ce6-b203-5bf288af5ea8" Jan 20 14:00:17.832857 kubelet[3709]: E0120 14:00:17.832573 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77799d95d6-2jfd5" podUID="f1bb23d8-4b1f-4aa2-8b63-89adfd023536" Jan 20 14:00:18.833039 kubelet[3709]: E0120 14:00:18.832990 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68d86775db-xr92p" podUID="d927ba29-b3e9-4c16-90f8-d925c765f0ce" Jan 20 14:00:20.833348 kubelet[3709]: E0120 14:00:20.833273 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vhqxx" podUID="23535579-9237-4da3-a34f-0ccbbd6a2ee0" Jan 20 14:00:21.482462 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 14:00:21.482607 kernel: audit: type=1130 audit(1768917621.469:930): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.32:22-10.200.16.10:41118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:00:21.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.32:22-10.200.16.10:41118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:00:21.469309 systemd[1]: Started sshd@23-10.200.20.32:22-10.200.16.10:41118.service - OpenSSH per-connection server daemon (10.200.16.10:41118). Jan 20 14:00:21.884000 audit[6143]: USER_ACCT pid=6143 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:21.903902 sshd[6143]: Accepted publickey for core from 10.200.16.10 port 41118 ssh2: RSA SHA256:8oRp7fHfsMbr5tiGU4DdFYKNuE3Q7/G5fCohEz31AuI Jan 20 14:00:21.904922 sshd-session[6143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 14:00:21.903000 audit[6143]: CRED_ACQ pid=6143 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:21.920481 kernel: audit: type=1101 audit(1768917621.884:931): pid=6143 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:21.920605 kernel: audit: type=1103 audit(1768917621.903:932): pid=6143 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:21.930512 systemd-logind[2119]: New session 27 of user core. 
Jan 20 14:00:21.936423 kernel: audit: type=1006 audit(1768917621.903:933): pid=6143 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 20 14:00:21.903000 audit[6143]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd2d4c870 a2=3 a3=0 items=0 ppid=1 pid=6143 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 14:00:21.954628 kernel: audit: type=1300 audit(1768917621.903:933): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd2d4c870 a2=3 a3=0 items=0 ppid=1 pid=6143 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 14:00:21.903000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 14:00:21.961735 kernel: audit: type=1327 audit(1768917621.903:933): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 14:00:21.962882 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 20 14:00:21.966000 audit[6143]: USER_START pid=6143 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:21.989000 audit[6147]: CRED_ACQ pid=6147 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:21.991413 kernel: audit: type=1105 audit(1768917621.966:934): pid=6143 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:22.007414 kernel: audit: type=1103 audit(1768917621.989:935): pid=6147 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:22.197045 sshd[6147]: Connection closed by 10.200.16.10 port 41118 Jan 20 14:00:22.196811 sshd-session[6143]: pam_unix(sshd:session): session closed for user core Jan 20 14:00:22.199000 audit[6143]: USER_END pid=6143 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:22.204705 systemd[1]: sshd@23-10.200.20.32:22-10.200.16.10:41118.service: Deactivated successfully. Jan 20 14:00:22.209436 systemd[1]: session-27.scope: Deactivated successfully. Jan 20 14:00:22.211154 systemd-logind[2119]: Session 27 logged out. Waiting for processes to exit. Jan 20 14:00:22.213674 systemd-logind[2119]: Removed session 27. 
Jan 20 14:00:22.199000 audit[6143]: CRED_DISP pid=6143 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:22.237444 kernel: audit: type=1106 audit(1768917622.199:936): pid=6143 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:22.237539 kernel: audit: type=1104 audit(1768917622.199:937): pid=6143 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:22.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.32:22-10.200.16.10:41118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:00:24.832729 kubelet[3709]: E0120 14:00:24.832487 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dc74856cf-7w8mb" podUID="5c39ce3f-4fc4-445a-adab-faa3675de94e" Jan 20 14:00:25.833607 kubelet[3709]: E0120 14:00:25.833239 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dc74856cf-zx59p" podUID="9e279828-5731-43f9-9e3d-019d423f62e9" Jan 20 14:00:27.286286 systemd[1]: Started sshd@24-10.200.20.32:22-10.200.16.10:41130.service - OpenSSH per-connection server daemon (10.200.16.10:41130). Jan 20 14:00:27.305518 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 14:00:27.305757 kernel: audit: type=1130 audit(1768917627.286:939): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.32:22-10.200.16.10:41130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:00:27.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.32:22-10.200.16.10:41130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 14:00:27.720000 audit[6186]: USER_ACCT pid=6186 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:27.738112 sshd[6186]: Accepted publickey for core from 10.200.16.10 port 41130 ssh2: RSA SHA256:8oRp7fHfsMbr5tiGU4DdFYKNuE3Q7/G5fCohEz31AuI Jan 20 14:00:27.738160 sshd-session[6186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 14:00:27.736000 audit[6186]: CRED_ACQ pid=6186 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:27.756041 kernel: audit: type=1101 audit(1768917627.720:940): pid=6186 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:27.756149 kernel: audit: type=1103 audit(1768917627.736:941): pid=6186 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:27.768229 kernel: audit: type=1006 audit(1768917627.736:942): pid=6186 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jan 20 14:00:27.736000 audit[6186]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff519b6f0 a2=3 a3=0 items=0 ppid=1 pid=6186 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 14:00:27.772450 systemd-logind[2119]: New session 28 of user core. Jan 20 14:00:27.736000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 14:00:27.796346 kernel: audit: type=1300 audit(1768917627.736:942): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff519b6f0 a2=3 a3=0 items=0 ppid=1 pid=6186 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 14:00:27.796442 kernel: audit: type=1327 audit(1768917627.736:942): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 14:00:27.796752 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 20 14:00:27.821000 audit[6186]: USER_START pid=6186 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:27.842000 audit[6190]: CRED_ACQ pid=6190 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:27.862431 kernel: audit: type=1105 audit(1768917627.821:943): pid=6186 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:27.862545 kernel: audit: type=1103 audit(1768917627.842:944): pid=6190 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:28.081703 sshd[6190]: Connection closed by 10.200.16.10 port 41130 Jan 20 14:00:28.080381 sshd-session[6186]: pam_unix(sshd:session): session closed for user core Jan 20 14:00:28.082000 audit[6186]: USER_END pid=6186 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:28.107500 kernel: audit: type=1106 audit(1768917628.082:945): pid=6186 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:28.107679 systemd[1]: sshd@24-10.200.20.32:22-10.200.16.10:41130.service: Deactivated successfully. Jan 20 14:00:28.110997 systemd[1]: session-28.scope: Deactivated successfully. Jan 20 14:00:28.082000 audit[6186]: CRED_DISP pid=6186 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:28.133454 kernel: audit: type=1104 audit(1768917628.082:946): pid=6186 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 14:00:28.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.32:22-10.200.16.10:41130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:00:28.132040 systemd-logind[2119]: Session 28 logged out. Waiting for processes to exit. Jan 20 14:00:28.133150 systemd-logind[2119]: Removed session 28. 
Jan 20 14:00:29.836105 kubelet[3709]: E0120 14:00:29.836053 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9ztjh" podUID="85bd9e66-9697-4ce6-b203-5bf288af5ea8" Jan 20 14:00:29.836690 kubelet[3709]: E0120 14:00:29.836656 3709 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77799d95d6-2jfd5" podUID="f1bb23d8-4b1f-4aa2-8b63-89adfd023536"