Jan 21 23:38:04.179754 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490] Jan 21 23:38:04.179771 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Wed Jan 21 22:02:38 -00 2026 Jan 21 23:38:04.179778 kernel: KASLR enabled Jan 21 23:38:04.179782 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jan 21 23:38:04.179787 kernel: printk: legacy bootconsole [pl11] enabled Jan 21 23:38:04.179792 kernel: efi: EFI v2.7 by EDK II Jan 21 23:38:04.179797 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89d018 RNG=0x3f979998 MEMRESERVE=0x3db83598 Jan 21 23:38:04.179801 kernel: random: crng init done Jan 21 23:38:04.179805 kernel: secureboot: Secure boot disabled Jan 21 23:38:04.179809 kernel: ACPI: Early table checksum verification disabled Jan 21 23:38:04.179813 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL) Jan 21 23:38:04.179817 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 21 23:38:04.179822 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 21 23:38:04.179827 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 21 23:38:04.179832 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 21 23:38:04.179837 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 21 23:38:04.179841 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 21 23:38:04.179846 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 21 23:38:04.179851 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 21 23:38:04.179855 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 21 23:38:04.179860 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jan 21 23:38:04.179864 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 21 23:38:04.179868 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jan 21 23:38:04.179873 kernel: ACPI: Use ACPI SPCR as default console: Yes Jan 21 23:38:04.179877 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 21 23:38:04.179882 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug Jan 21 23:38:04.179886 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug Jan 21 23:38:04.179891 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 21 23:38:04.179896 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 21 23:38:04.179900 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 21 23:38:04.179905 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 21 23:38:04.179909 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 21 23:38:04.179913 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 21 23:38:04.179918 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 21 23:38:04.179922 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 21 23:38:04.179926 kernel: ACPI: SRAT: Node 0 PXM 
0 [mem 0x800000000000-0xffffffffffff] hotplug Jan 21 23:38:04.179931 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff] Jan 21 23:38:04.179935 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff] Jan 21 23:38:04.179941 kernel: Zone ranges: Jan 21 23:38:04.179945 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jan 21 23:38:04.179952 kernel: DMA32 empty Jan 21 23:38:04.179956 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jan 21 23:38:04.179961 kernel: Device empty Jan 21 23:38:04.179967 kernel: Movable zone start for each node Jan 21 23:38:04.179971 kernel: Early memory node ranges Jan 21 23:38:04.179976 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jan 21 23:38:04.179981 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff] Jan 21 23:38:04.179985 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff] Jan 21 23:38:04.179990 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff] Jan 21 23:38:04.179994 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff] Jan 21 23:38:04.179999 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff] Jan 21 23:38:04.180004 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jan 21 23:38:04.180009 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jan 21 23:38:04.180014 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jan 21 23:38:04.180019 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1 Jan 21 23:38:04.180023 kernel: psci: probing for conduit method from ACPI. Jan 21 23:38:04.180028 kernel: psci: PSCIv1.3 detected in firmware. Jan 21 23:38:04.180032 kernel: psci: Using standard PSCI v0.2 function IDs Jan 21 23:38:04.180037 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jan 21 23:38:04.182065 kernel: psci: SMC Calling Convention v1.4 Jan 21 23:38:04.182089 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jan 21 23:38:04.182096 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jan 21 23:38:04.182102 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jan 21 23:38:04.182107 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jan 21 23:38:04.182117 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 21 23:38:04.182121 kernel: Detected PIPT I-cache on CPU0 Jan 21 23:38:04.182126 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Jan 21 23:38:04.182131 kernel: CPU features: detected: GIC system register CPU interface Jan 21 23:38:04.182136 kernel: CPU features: detected: Spectre-v4 Jan 21 23:38:04.182140 kernel: CPU features: detected: Spectre-BHB Jan 21 23:38:04.182145 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 21 23:38:04.182150 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 21 23:38:04.182155 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Jan 21 23:38:04.182159 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 21 23:38:04.182165 kernel: alternatives: applying boot alternatives Jan 21 23:38:04.182171 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=ca60929099aca00ce2f86d3c34ded0cbc27315310cbe1bd1d91f864aae71550e Jan 21 23:38:04.182176 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 21 23:38:04.182181 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 21 23:38:04.182186 kernel: Fallback order for Node 0: 0 Jan 21 23:38:04.182191 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Jan 21 23:38:04.182195 kernel: Policy zone: Normal Jan 21 23:38:04.182200 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 21 23:38:04.182205 kernel: software IO TLB: area num 2. Jan 21 23:38:04.182209 kernel: software IO TLB: mapped [mem 0x0000000037380000-0x000000003b380000] (64MB) Jan 21 23:38:04.182214 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 21 23:38:04.182220 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 21 23:38:04.182226 kernel: rcu: RCU event tracing is enabled. Jan 21 23:38:04.182231 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 21 23:38:04.182235 kernel: Trampoline variant of Tasks RCU enabled. Jan 21 23:38:04.182240 kernel: Tracing variant of Tasks RCU enabled. Jan 21 23:38:04.182245 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 21 23:38:04.182250 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 21 23:38:04.182254 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 21 23:38:04.182259 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 21 23:38:04.182264 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 21 23:38:04.182269 kernel: GICv3: 960 SPIs implemented Jan 21 23:38:04.182274 kernel: GICv3: 0 Extended SPIs implemented Jan 21 23:38:04.182279 kernel: Root IRQ handler: gic_handle_irq Jan 21 23:38:04.182284 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jan 21 23:38:04.182288 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Jan 21 23:38:04.182293 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 21 23:38:04.182298 kernel: ITS: No ITS available, not enabling LPIs Jan 21 23:38:04.182303 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 21 23:38:04.182308 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Jan 21 23:38:04.182313 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 21 23:38:04.182318 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Jan 21 23:38:04.182322 kernel: Console: colour dummy device 80x25 Jan 21 23:38:04.182329 kernel: printk: legacy console [tty1] enabled Jan 21 23:38:04.182334 kernel: ACPI: Core revision 20240827 Jan 21 23:38:04.182339 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Jan 21 23:38:04.182344 kernel: pid_max: default: 32768 minimum: 301 Jan 21 23:38:04.182349 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 21 23:38:04.182354 kernel: landlock: Up and running. Jan 21 23:38:04.182359 kernel: SELinux: Initializing. Jan 21 23:38:04.182365 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 21 23:38:04.182370 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 21 23:38:04.182375 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1 Jan 21 23:38:04.182380 kernel: Hyper-V: Host Build 10.0.26102.1172-1-0 Jan 21 23:38:04.182389 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 21 23:38:04.182395 kernel: rcu: Hierarchical SRCU implementation. Jan 21 23:38:04.182400 kernel: rcu: Max phase no-delay instances is 400. Jan 21 23:38:04.182405 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 21 23:38:04.182410 kernel: Remapping and enabling EFI services. Jan 21 23:38:04.182416 kernel: smp: Bringing up secondary CPUs ... Jan 21 23:38:04.182421 kernel: Detected PIPT I-cache on CPU1 Jan 21 23:38:04.182427 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 21 23:38:04.182432 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Jan 21 23:38:04.182438 kernel: smp: Brought up 1 node, 2 CPUs Jan 21 23:38:04.182443 kernel: SMP: Total of 2 processors activated. 
Jan 21 23:38:04.182448 kernel: CPU: All CPU(s) started at EL1 Jan 21 23:38:04.182453 kernel: CPU features: detected: 32-bit EL0 Support Jan 21 23:38:04.182459 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 21 23:38:04.182464 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 21 23:38:04.182469 kernel: CPU features: detected: Common not Private translations Jan 21 23:38:04.182476 kernel: CPU features: detected: CRC32 instructions Jan 21 23:38:04.182481 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Jan 21 23:38:04.182486 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 21 23:38:04.182492 kernel: CPU features: detected: LSE atomic instructions Jan 21 23:38:04.182497 kernel: CPU features: detected: Privileged Access Never Jan 21 23:38:04.182502 kernel: CPU features: detected: Speculation barrier (SB) Jan 21 23:38:04.182507 kernel: CPU features: detected: TLB range maintenance instructions Jan 21 23:38:04.182514 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 21 23:38:04.182519 kernel: CPU features: detected: Scalable Vector Extension Jan 21 23:38:04.182524 kernel: alternatives: applying system-wide alternatives Jan 21 23:38:04.182530 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Jan 21 23:38:04.182535 kernel: SVE: maximum available vector length 16 bytes per vector Jan 21 23:38:04.182540 kernel: SVE: default vector length 16 bytes per vector Jan 21 23:38:04.182546 kernel: Memory: 3979964K/4194160K available (11200K kernel code, 2458K rwdata, 9088K rodata, 12416K init, 1038K bss, 193008K reserved, 16384K cma-reserved) Jan 21 23:38:04.182552 kernel: devtmpfs: initialized Jan 21 23:38:04.182557 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 21 23:38:04.182563 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 21 23:38:04.182568 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 21 23:38:04.182573 kernel: 0 pages in range for non-PLT usage Jan 21 23:38:04.182578 kernel: 515184 pages in range for PLT usage Jan 21 23:38:04.182584 kernel: pinctrl core: initialized pinctrl subsystem Jan 21 23:38:04.182590 kernel: SMBIOS 3.1.0 present. Jan 21 23:38:04.182595 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025 Jan 21 23:38:04.182600 kernel: DMI: Memory slots populated: 2/2 Jan 21 23:38:04.182605 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 21 23:38:04.182611 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 21 23:38:04.182616 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 21 23:38:04.182621 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 21 23:38:04.182627 kernel: audit: initializing netlink subsys (disabled) Jan 21 23:38:04.182633 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1 Jan 21 23:38:04.182638 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 21 23:38:04.182643 kernel: cpuidle: using governor menu Jan 21 23:38:04.182648 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 21 23:38:04.182654 kernel: ASID allocator initialised with 32768 entries Jan 21 23:38:04.182659 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 21 23:38:04.182664 kernel: Serial: AMBA PL011 UART driver Jan 21 23:38:04.182670 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 21 23:38:04.182676 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 21 23:38:04.182681 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 21 23:38:04.182686 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 21 23:38:04.182691 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 21 23:38:04.182697 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 21 23:38:04.182702 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 21 23:38:04.182708 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 21 23:38:04.182713 kernel: ACPI: Added _OSI(Module Device) Jan 21 23:38:04.182718 kernel: ACPI: Added _OSI(Processor Device) Jan 21 23:38:04.182723 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 21 23:38:04.182728 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 21 23:38:04.182734 kernel: ACPI: Interpreter enabled Jan 21 23:38:04.182739 kernel: ACPI: Using GIC for interrupt routing Jan 21 23:38:04.182745 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 21 23:38:04.182750 kernel: printk: legacy console [ttyAMA0] enabled Jan 21 23:38:04.182756 kernel: printk: legacy bootconsole [pl11] disabled Jan 21 23:38:04.182761 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 21 23:38:04.182766 kernel: ACPI: CPU0 has been hot-added Jan 21 23:38:04.182772 kernel: ACPI: CPU1 has been hot-added Jan 21 23:38:04.182777 kernel: iommu: Default domain type: Translated Jan 21 23:38:04.182783 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 21 23:38:04.182788 kernel: efivars: Registered efivars operations Jan 21 23:38:04.182793 kernel: vgaarb: loaded Jan 21 23:38:04.182798 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 21 23:38:04.182803 kernel: VFS: Disk quotas dquot_6.6.0 Jan 21 23:38:04.182808 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 21 23:38:04.182814 kernel: pnp: PnP ACPI init Jan 21 23:38:04.182820 kernel: pnp: PnP ACPI: found 0 devices Jan 21 23:38:04.182825 kernel: NET: Registered PF_INET protocol family Jan 21 23:38:04.182830 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 21 23:38:04.182835 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 21 23:38:04.182841 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 21 23:38:04.182846 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 21 23:38:04.182851 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 21 23:38:04.182857 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 21 23:38:04.182863 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 21 23:38:04.182868 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 21 23:38:04.182873 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 21 23:38:04.182879 kernel: PCI: CLS 0 bytes, default 64 Jan 21 23:38:04.182884 kernel: kvm [1]: HYP mode not available Jan 
21 23:38:04.182889 kernel: Initialise system trusted keyrings Jan 21 23:38:04.182894 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 21 23:38:04.182900 kernel: Key type asymmetric registered Jan 21 23:38:04.182905 kernel: Asymmetric key parser 'x509' registered Jan 21 23:38:04.182910 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 21 23:38:04.182916 kernel: io scheduler mq-deadline registered Jan 21 23:38:04.182921 kernel: io scheduler kyber registered Jan 21 23:38:04.182926 kernel: io scheduler bfq registered Jan 21 23:38:04.182931 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 21 23:38:04.182937 kernel: thunder_xcv, ver 1.0 Jan 21 23:38:04.182943 kernel: thunder_bgx, ver 1.0 Jan 21 23:38:04.182948 kernel: nicpf, ver 1.0 Jan 21 23:38:04.182953 kernel: nicvf, ver 1.0 Jan 21 23:38:04.183124 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 21 23:38:04.183197 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-21T23:38:02 UTC (1769038682) Jan 21 23:38:04.183206 kernel: efifb: probing for efifb Jan 21 23:38:04.183211 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 21 23:38:04.183217 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 21 23:38:04.183222 kernel: efifb: scrolling: redraw Jan 21 23:38:04.183227 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 21 23:38:04.183232 kernel: Console: switching to colour frame buffer device 128x48 Jan 21 23:38:04.183237 kernel: fb0: EFI VGA frame buffer device Jan 21 23:38:04.183243 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 21 23:38:04.183249 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 21 23:38:04.183254 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jan 21 23:38:04.183259 kernel: NET: Registered PF_INET6 protocol family Jan 21 23:38:04.183264 kernel: watchdog: NMI not fully supported Jan 21 23:38:04.183270 kernel: watchdog: Hard watchdog permanently disabled Jan 21 23:38:04.183275 kernel: Segment Routing with IPv6 Jan 21 23:38:04.183281 kernel: In-situ OAM (IOAM) with IPv6 Jan 21 23:38:04.183286 kernel: NET: Registered PF_PACKET protocol family Jan 21 23:38:04.183291 kernel: Key type dns_resolver registered Jan 21 23:38:04.183296 kernel: registered taskstats version 1 Jan 21 23:38:04.183301 kernel: Loading compiled-in X.509 certificates Jan 21 23:38:04.183307 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 665f7ea56fc50c946d7b42db233309a1abf7475f' Jan 21 23:38:04.183312 kernel: Demotion targets for Node 0: null Jan 21 23:38:04.183318 kernel: Key type .fscrypt registered Jan 21 23:38:04.183323 kernel: Key type fscrypt-provisioning registered Jan 21 23:38:04.183328 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 21 23:38:04.183333 kernel: ima: Allocated hash algorithm: sha1 Jan 21 23:38:04.183338 kernel: ima: No architecture policies found Jan 21 23:38:04.183344 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 21 23:38:04.183349 kernel: clk: Disabling unused clocks Jan 21 23:38:04.183354 kernel: PM: genpd: Disabling unused power domains Jan 21 23:38:04.183360 kernel: Freeing unused kernel memory: 12416K Jan 21 23:38:04.183365 kernel: Run /init as init process Jan 21 23:38:04.183370 kernel: with arguments: Jan 21 23:38:04.183376 kernel: /init Jan 21 23:38:04.183381 kernel: with environment: Jan 21 23:38:04.183386 kernel: HOME=/ Jan 21 23:38:04.183391 kernel: TERM=linux Jan 21 23:38:04.183397 kernel: hv_vmbus: Vmbus version:5.3 Jan 21 23:38:04.183402 kernel: hv_vmbus: registering driver hid_hyperv Jan 21 23:38:04.183407 kernel: SCSI subsystem initialized Jan 21 23:38:04.183412 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 21 23:38:04.183497 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 21 23:38:04.183504 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 21 23:38:04.183511 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 21 23:38:04.183516 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 21 23:38:04.183522 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 21 23:38:04.183527 kernel: PTP clock support registered Jan 21 23:38:04.183532 kernel: hv_utils: Registering HyperV Utility Driver Jan 21 23:38:04.183537 kernel: hv_vmbus: registering driver hv_utils Jan 21 23:38:04.183542 kernel: hv_utils: Heartbeat IC version 3.0 Jan 21 23:38:04.183549 kernel: hv_utils: Shutdown IC version 3.2 Jan 21 23:38:04.183554 kernel: hv_utils: TimeSync IC version 4.0 Jan 21 23:38:04.183559 kernel: hv_vmbus: registering driver hv_storvsc Jan 21 23:38:04.183650 kernel: scsi host1: storvsc_host_t Jan 21 23:38:04.183727 kernel: scsi host0: storvsc_host_t Jan 21 23:38:04.183816 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 21 23:38:04.183899 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 21 23:38:04.183973 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 21 23:38:04.184061 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 21 23:38:04.184142 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 21 23:38:04.184216 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 21 23:38:04.184288 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 21 23:38:04.184369 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#257 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 21 23:38:04.184438 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 21 23:38:04.184445 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 21 23:38:04.184517 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 21 23:38:04.184591 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 21 23:38:04.184599 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 21 23:38:04.184605 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 21 23:38:04.184610 kernel: device-mapper: uevent: version 1.0.3 Jan 21 23:38:04.184615 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 21 23:38:04.184687 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 21 23:38:04.184694 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 21 23:38:04.184699 kernel: raid6: neonx8 gen() 18547 MB/s Jan 21 23:38:04.184705 kernel: raid6: neonx4 gen() 18584 MB/s Jan 21 23:38:04.184710 kernel: raid6: neonx2 gen() 17094 MB/s Jan 21 23:38:04.184715 kernel: raid6: neonx1 gen() 15019 MB/s Jan 21 23:38:04.184721 kernel: raid6: int64x8 gen() 10560 MB/s Jan 21 23:38:04.184726 kernel: raid6: int64x4 gen() 10618 MB/s Jan 21 23:38:04.184731 kernel: raid6: int64x2 gen() 8980 MB/s Jan 21 23:38:04.184736 kernel: raid6: int64x1 gen() 6999 MB/s Jan 21 23:38:04.184742 kernel: raid6: using algorithm neonx4 gen() 18584 MB/s Jan 21 23:38:04.184748 kernel: raid6: .... xor() 15123 MB/s, rmw enabled Jan 21 23:38:04.184753 kernel: raid6: using neon recovery algorithm Jan 21 23:38:04.184758 kernel: xor: measuring software checksum speed Jan 21 23:38:04.184763 kernel: 8regs : 28631 MB/sec Jan 21 23:38:04.184768 kernel: 32regs : 28379 MB/sec Jan 21 23:38:04.184773 kernel: arm64_neon : 37384 MB/sec Jan 21 23:38:04.184779 kernel: xor: using function: arm64_neon (37384 MB/sec) Jan 21 23:38:04.184785 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 21 23:38:04.184790 kernel: BTRFS: device fsid 297897fd-6303-44b2-8c75-36ebd35c694f devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (397) Jan 21 23:38:04.184796 kernel: BTRFS info (device dm-0): first mount of filesystem 297897fd-6303-44b2-8c75-36ebd35c694f Jan 21 23:38:04.184801 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 21 23:38:04.184806 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 21 23:38:04.184811 kernel: BTRFS info (device dm-0): enabling free space tree Jan 21 23:38:04.184817 kernel: loop: module loaded Jan 21 23:38:04.184823 kernel: loop0: detected capacity change from 0 to 91488 Jan 21 23:38:04.184828 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 21 23:38:04.184834 systemd[1]: Successfully made /usr/ read-only. Jan 21 23:38:04.184842 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 21 23:38:04.184848 systemd[1]: Detected virtualization microsoft. Jan 21 23:38:04.184854 systemd[1]: Detected architecture arm64. Jan 21 23:38:04.184860 systemd[1]: Running in initrd. Jan 21 23:38:04.184865 systemd[1]: No hostname configured, using default hostname. Jan 21 23:38:04.184871 systemd[1]: Hostname set to . Jan 21 23:38:04.184877 systemd[1]: Initializing machine ID from random generator. Jan 21 23:38:04.184883 systemd[1]: Queued start job for default target initrd.target. Jan 21 23:38:04.184888 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 21 23:38:04.184895 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 21 23:38:04.184901 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 21 23:38:04.184907 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 21 23:38:04.184912 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 21 23:38:04.184919 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 21 23:38:04.184925 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 21 23:38:04.184931 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 21 23:38:04.184937 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 21 23:38:04.184943 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 21 23:38:04.184949 systemd[1]: Reached target paths.target - Path Units. Jan 21 23:38:04.184954 systemd[1]: Reached target slices.target - Slice Units. Jan 21 23:38:04.184960 systemd[1]: Reached target swap.target - Swaps. Jan 21 23:38:04.184966 systemd[1]: Reached target timers.target - Timer Units. Jan 21 23:38:04.184972 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 21 23:38:04.184978 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 21 23:38:04.184983 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 21 23:38:04.184989 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 21 23:38:04.184994 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 21 23:38:04.185000 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 21 23:38:04.185011 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 21 23:38:04.185018 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 21 23:38:04.185023 systemd[1]: Reached target sockets.target - Socket Units. Jan 21 23:38:04.185029 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 21 23:38:04.185036 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 21 23:38:04.185609 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 21 23:38:04.185630 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 21 23:38:04.185638 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 21 23:38:04.185645 systemd[1]: Starting systemd-fsck-usr.service... Jan 21 23:38:04.185651 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 21 23:38:04.185657 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 21 23:38:04.185687 systemd-journald[535]: Collecting audit messages is enabled. Jan 21 23:38:04.185703 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 21 23:38:04.185710 systemd-journald[535]: Journal started Jan 21 23:38:04.185723 systemd-journald[535]: Runtime Journal (/run/log/journal/d11b9e7aff13482a9309c6c8f5173acf) is 8M, max 78.3M, 70.3M free. Jan 21 23:38:04.203448 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 21 23:38:04.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:04.208692 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 21 23:38:04.248860 kernel: audit: type=1130 audit(1769038684.207:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:04.248893 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 21 23:38:04.248902 kernel: audit: type=1130 audit(1769038684.231:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:04.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:04.232778 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 21 23:38:04.273467 kernel: Bridge firewalling registered Jan 21 23:38:04.273488 kernel: audit: type=1130 audit(1769038684.257:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:04.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:04.273694 systemd[1]: Finished systemd-fsck-usr.service. Jan 21 23:38:04.278945 systemd-modules-load[538]: Inserted module 'br_netfilter' Jan 21 23:38:04.312580 kernel: audit: type=1130 audit(1769038684.278:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:04.312604 kernel: audit: type=1130 audit(1769038684.298:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:04.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:04.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:04.279835 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 21 23:38:04.301448 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 21 23:38:04.322825 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 21 23:38:04.338984 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 21 23:38:04.352189 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 21 23:38:04.374459 kernel: audit: type=1130 audit(1769038684.356:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:04.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:04.355116 systemd-tmpfiles[555]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 21 23:38:04.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:04.368692 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 21 23:38:04.401792 kernel: audit: type=1130 audit(1769038684.379:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:04.397514 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 21 23:38:04.424692 kernel: audit: type=1130 audit(1769038684.405:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:04.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:04.424594 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 21 23:38:04.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:04.449071 kernel: audit: type=1130 audit(1769038684.429:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:04.449345 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 21 23:38:04.459000 audit: BPF prog-id=6 op=LOAD Jan 21 23:38:04.462167 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 21 23:38:04.471915 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 21 23:38:04.492169 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 21 23:38:04.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:04.509404 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 21 23:38:04.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:04.532188 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 21 23:38:04.533613 systemd-resolved[562]: Positive Trust Anchors: Jan 21 23:38:04.533621 systemd-resolved[562]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 21 23:38:04.533623 systemd-resolved[562]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 21 23:38:04.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:04.533642 systemd-resolved[562]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 21 23:38:04.550664 systemd-resolved[562]: Defaulting to hostname 'linux'. Jan 21 23:38:04.551390 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 21 23:38:04.556585 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 21 23:38:04.610484 dracut-cmdline[577]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=ca60929099aca00ce2f86d3c34ded0cbc27315310cbe1bd1d91f864aae71550e Jan 21 23:38:04.706133 kernel: Loading iSCSI transport class v2.0-870. Jan 21 23:38:04.723081 kernel: iscsi: registered transport (tcp) Jan 21 23:38:04.740325 kernel: iscsi: registered transport (qla4xxx) Jan 21 23:38:04.740386 kernel: QLogic iSCSI HBA Driver Jan 21 23:38:04.767002 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 21 23:38:04.783829 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 21 23:38:04.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:04.789356 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 21 23:38:04.837363 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 21 23:38:04.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:04.843388 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 21 23:38:04.863596 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 21 23:38:04.887304 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jan 21 23:38:04.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:04.896000 audit: BPF prog-id=7 op=LOAD Jan 21 23:38:04.896000 audit: BPF prog-id=8 op=LOAD Jan 21 23:38:04.898204 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 21 23:38:04.943333 systemd-udevd[806]: Using default interface naming scheme 'v257'. Jan 21 23:38:04.949400 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 21 23:38:04.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:04.963297 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 21 23:38:04.989091 dracut-pre-trigger[882]: rd.md=0: removing MD RAID activation Jan 21 23:38:05.003279 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 21 23:38:05.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:05.013000 audit: BPF prog-id=9 op=LOAD Jan 21 23:38:05.015478 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 21 23:38:05.035646 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 21 23:38:05.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:05.047193 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 21 23:38:05.064708 systemd-networkd[940]: lo: Link UP Jan 21 23:38:05.067554 systemd-networkd[940]: lo: Gained carrier Jan 21 23:38:05.070761 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 21 23:38:05.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:05.079735 systemd[1]: Reached target network.target - Network. Jan 21 23:38:05.098742 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 21 23:38:05.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:05.110490 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 21 23:38:05.182082 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#291 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 21 23:38:05.209099 kernel: hv_vmbus: registering driver hv_netvsc Jan 21 23:38:05.216728 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 21 23:38:05.216847 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 23:38:05.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:38:05.233242 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 21 23:38:05.243478 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 21 23:38:05.271958 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 23:38:05.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:05.304398 systemd-networkd[940]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 21 23:38:05.304406 systemd-networkd[940]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 21 23:38:05.315101 systemd-networkd[940]: eth0: Link UP Jan 21 23:38:05.315412 systemd-networkd[940]: eth0: Gained carrier Jan 21 23:38:05.315424 systemd-networkd[940]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 21 23:38:05.342216 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 21 23:38:05.357087 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 21 23:38:05.374834 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 21 23:38:05.401474 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 21 23:38:05.424012 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 21 23:38:05.549194 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 21 23:38:05.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:05.560187 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 21 23:38:05.565814 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 21 23:38:05.575761 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 21 23:38:05.589223 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 21 23:38:05.614747 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 21 23:38:05.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:06.396707 systemd-networkd[940]: eth0: Gained IPv6LL Jan 21 23:38:06.458962 disk-uuid[1051]: Warning: The kernel is still using the old partition table. Jan 21 23:38:06.458962 disk-uuid[1051]: The new table will be used at the next reboot or after you Jan 21 23:38:06.458962 disk-uuid[1051]: run partprobe(8) or kpartx(8) Jan 21 23:38:06.458962 disk-uuid[1051]: The operation has completed successfully. Jan 21 23:38:06.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:06.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:38:06.469430 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 21 23:38:06.469556 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 21 23:38:06.478271 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 21 23:38:06.528072 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1178) Jan 21 23:38:06.537639 kernel: BTRFS info (device sda6): first mount of filesystem 34fe2ffc-17d8-4635-9624-842bf41e4932 Jan 21 23:38:06.537652 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 21 23:38:06.550105 kernel: BTRFS info (device sda6): turning on async discard Jan 21 23:38:06.550142 kernel: BTRFS info (device sda6): enabling free space tree Jan 21 23:38:06.560063 kernel: BTRFS info (device sda6): last unmount of filesystem 34fe2ffc-17d8-4635-9624-842bf41e4932 Jan 21 23:38:06.560331 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 21 23:38:06.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:06.567220 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 21 23:38:06.837332 ignition[1197]: Ignition 2.22.0 Jan 21 23:38:06.840001 ignition[1197]: Stage: fetch-offline Jan 21 23:38:06.842164 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 21 23:38:06.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:06.840152 ignition[1197]: no configs at "/usr/lib/ignition/base.d" Jan 21 23:38:06.850492 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 21 23:38:06.840162 ignition[1197]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 21 23:38:06.840240 ignition[1197]: parsed url from cmdline: "" Jan 21 23:38:06.840243 ignition[1197]: no config URL provided Jan 21 23:38:06.840247 ignition[1197]: reading system config file "/usr/lib/ignition/user.ign" Jan 21 23:38:06.840253 ignition[1197]: no config at "/usr/lib/ignition/user.ign" Jan 21 23:38:06.840257 ignition[1197]: failed to fetch config: resource requires networking Jan 21 23:38:06.840471 ignition[1197]: Ignition finished successfully Jan 21 23:38:06.889244 ignition[1205]: Ignition 2.22.0 Jan 21 23:38:06.889250 ignition[1205]: Stage: fetch Jan 21 23:38:06.889461 ignition[1205]: no configs at "/usr/lib/ignition/base.d" Jan 21 23:38:06.889467 ignition[1205]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 21 23:38:06.889534 ignition[1205]: parsed url from cmdline: "" Jan 21 23:38:06.889537 ignition[1205]: no config URL provided Jan 21 23:38:06.889544 ignition[1205]: reading system config file "/usr/lib/ignition/user.ign" Jan 21 23:38:06.889549 ignition[1205]: no config at "/usr/lib/ignition/user.ign" Jan 21 23:38:06.889565 ignition[1205]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 21 23:38:06.889708 ignition[1205]: GET error: Get "http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 21 23:38:07.089979 ignition[1205]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #2 Jan 21 23:38:07.090187 ignition[1205]: GET error: Get "http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 21 23:38:07.278457 systemd-networkd[940]: eth0: Lost carrier Jan 21 23:38:07.491024 ignition[1205]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #3 Jan 21 23:38:07.491195 ignition[1205]: GET error: Get "http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 21 23:38:08.291882 ignition[1205]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #4 Jan 21 23:38:08.292116 ignition[1205]: GET error: Get "http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 21 23:38:09.288198 systemd-networkd[940]: eth0: Gained carrier Jan 21 23:38:09.288224 systemd-networkd[940]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 21 23:38:09.341086 systemd-networkd[940]: eth0: DHCPv4 address 10.200.20.29/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 21 23:38:09.892363 ignition[1205]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #5 Jan 21 23:38:09.992750 ignition[1205]: GET result: OK Jan 21 23:38:09.992815 ignition[1205]: config has been read from IMDS userdata Jan 21 23:38:09.992828 ignition[1205]: parsing config with SHA512: 772100840a12f59b20261eadfe846a2095d17b9213ff8ad929b13f2c7174b9ce4b463f81f38b57310159b55373e8d61d8bc27415355993962d2970b51841feca Jan 21 23:38:09.996247 unknown[1205]: fetched 
base config from "system" Jan 21 23:38:10.009930 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 21 23:38:10.009950 kernel: audit: type=1130 audit(1769038690.002:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:10.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:09.996700 ignition[1205]: fetch: fetch complete Jan 21 23:38:09.996266 unknown[1205]: fetched base config from "system" Jan 21 23:38:09.996707 ignition[1205]: fetch: fetch passed Jan 21 23:38:09.996271 unknown[1205]: fetched user config from "azure" Jan 21 23:38:09.996770 ignition[1205]: Ignition finished successfully Jan 21 23:38:09.998570 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 21 23:38:10.004587 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 21 23:38:10.056231 ignition[1213]: Ignition 2.22.0 Jan 21 23:38:10.056244 ignition[1213]: Stage: kargs Jan 21 23:38:10.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:10.059974 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 21 23:38:10.086113 kernel: audit: type=1130 audit(1769038690.063:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:10.056478 ignition[1213]: no configs at "/usr/lib/ignition/base.d" Jan 21 23:38:10.065275 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 21 23:38:10.056488 ignition[1213]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 21 23:38:10.056981 ignition[1213]: kargs: kargs passed Jan 21 23:38:10.057022 ignition[1213]: Ignition finished successfully Jan 21 23:38:10.111528 ignition[1219]: Ignition 2.22.0 Jan 21 23:38:10.111543 ignition[1219]: Stage: disks Jan 21 23:38:10.117294 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 21 23:38:10.111721 ignition[1219]: no configs at "/usr/lib/ignition/base.d" Jan 21 23:38:10.144110 kernel: audit: type=1130 audit(1769038690.124:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:10.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:10.111728 ignition[1219]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 21 23:38:10.125890 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 21 23:38:10.112329 ignition[1219]: disks: disks passed Jan 21 23:38:10.145838 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 21 23:38:10.112380 ignition[1219]: Ignition finished successfully Jan 21 23:38:10.155697 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 21 23:38:10.164417 systemd[1]: Reached target sysinit.target - System Initialization. Jan 21 23:38:10.171116 systemd[1]: Reached target basic.target - Basic System. 
Jan 21 23:38:10.180702 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 21 23:38:10.246304 systemd-fsck[1228]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks Jan 21 23:38:10.254458 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 21 23:38:10.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:10.264779 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 21 23:38:10.288010 kernel: audit: type=1130 audit(1769038690.262:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:10.401061 kernel: EXT4-fs (sda9): mounted filesystem f91073cf-b203-416d-af86-ee4485629886 r/w with ordered data mode. Quota mode: none. Jan 21 23:38:10.401730 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 21 23:38:10.408779 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 21 23:38:10.421298 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 21 23:38:10.425889 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 21 23:38:10.439947 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 21 23:38:10.456073 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 21 23:38:10.466773 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1243) Jan 21 23:38:10.456159 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 21 23:38:10.483067 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 21 23:38:10.497130 kernel: BTRFS info (device sda6): first mount of filesystem 34fe2ffc-17d8-4635-9624-842bf41e4932 Jan 21 23:38:10.497152 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 21 23:38:10.493571 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 21 23:38:10.519211 kernel: BTRFS info (device sda6): turning on async discard Jan 21 23:38:10.519252 kernel: BTRFS info (device sda6): enabling free space tree Jan 21 23:38:10.521103 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 21 23:38:10.605840 coreos-metadata[1245]: Jan 21 23:38:10.605 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 21 23:38:10.612666 coreos-metadata[1245]: Jan 21 23:38:10.612 INFO Fetch successful Jan 21 23:38:10.612666 coreos-metadata[1245]: Jan 21 23:38:10.612 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 21 23:38:10.626362 coreos-metadata[1245]: Jan 21 23:38:10.625 INFO Fetch successful Jan 21 23:38:10.630871 coreos-metadata[1245]: Jan 21 23:38:10.630 INFO wrote hostname ci-4515.1.0-n-a0ba06055b to /sysroot/etc/hostname Jan 21 23:38:10.638519 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 21 23:38:10.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:38:10.663067 kernel: audit: type=1130 audit(1769038690.643:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:10.663113 kernel: hv_netvsc 7ced8db8-828d-7ced-8db8-828d7ced8db8 eth0: VF slot 1 added Jan 21 23:38:10.686364 kernel: hv_vmbus: registering driver hv_pci Jan 21 23:38:10.686421 kernel: hv_pci a2f38eb7-3048-4acb-9b98-9a13c2688456: PCI VMBus probing: Using version 0x10004 Jan 21 23:38:10.698998 kernel: hv_pci a2f38eb7-3048-4acb-9b98-9a13c2688456: PCI host bridge to bus 3048:00 Jan 21 23:38:10.699216 kernel: pci_bus 3048:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 21 23:38:10.699328 initrd-setup-root[1274]: cut: /sysroot/etc/passwd: No such file or directory Jan 21 23:38:10.709154 kernel: pci_bus 3048:00: No busn resource found for root bus, will use [bus 00-ff] Jan 21 23:38:10.715552 kernel: pci 3048:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Jan 21 23:38:10.716501 initrd-setup-root[1281]: cut: /sysroot/etc/group: No such file or directory Jan 21 23:38:10.729138 kernel: pci 3048:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 21 23:38:10.734111 kernel: pci 3048:00:02.0: enabling Extended Tags Jan 21 23:38:10.735803 initrd-setup-root[1288]: cut: /sysroot/etc/shadow: No such file or directory Jan 21 23:38:10.750073 kernel: pci 3048:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 3048:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Jan 21 23:38:10.759584 kernel: pci_bus 3048:00: busn_res: [bus 00-ff] end is updated to 00 Jan 21 23:38:10.759767 kernel: pci 3048:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Jan 21 23:38:10.760189 initrd-setup-root[1295]: cut: /sysroot/etc/gshadow: No such file or directory Jan 21 23:38:10.853350 kernel: mlx5_core 3048:00:02.0: enabling device (0000 -> 0002) Jan 21 23:38:10.862940 kernel: mlx5_core 3048:00:02.0: PTM is not supported by PCIe Jan 21 23:38:10.863150 kernel: mlx5_core 3048:00:02.0: firmware version: 16.30.5026 Jan 21 23:38:11.039001 kernel: hv_netvsc 7ced8db8-828d-7ced-8db8-828d7ced8db8 eth0: VF registering: eth1 Jan 21 23:38:11.039234 kernel: mlx5_core 3048:00:02.0 eth1: joined to eth0 Jan 21 23:38:11.042071 kernel: mlx5_core 3048:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 21 23:38:11.056612 systemd-networkd[940]: eth1: Interface name change detected, renamed to enP12360s1. Jan 21 23:38:11.062615 kernel: mlx5_core 3048:00:02.0 enP12360s1: renamed from eth1 Jan 21 23:38:11.066496 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 21 23:38:11.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:11.090288 kernel: audit: type=1130 audit(1769038691.075:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:11.090406 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 21 23:38:11.102184 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 21 23:38:11.115662 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 21 23:38:11.119721 kernel: BTRFS info (device sda6): last unmount of filesystem 34fe2ffc-17d8-4635-9624-842bf41e4932 Jan 21 23:38:11.140587 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 21 23:38:11.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:11.163802 ignition[1378]: INFO : Ignition 2.22.0 Jan 21 23:38:11.163802 ignition[1378]: INFO : Stage: mount Jan 21 23:38:11.186315 kernel: audit: type=1130 audit(1769038691.149:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:11.186341 kernel: audit: type=1130 audit(1769038691.171:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:11.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:11.168123 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 21 23:38:11.193512 ignition[1378]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 21 23:38:11.193512 ignition[1378]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 21 23:38:11.193512 ignition[1378]: INFO : mount: mount passed Jan 21 23:38:11.193512 ignition[1378]: INFO : Ignition finished successfully Jan 21 23:38:11.186962 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 21 23:38:11.254057 kernel: mlx5_core 3048:00:02.0 enP12360s1: Link up Jan 21 23:38:11.290055 kernel: hv_netvsc 7ced8db8-828d-7ced-8db8-828d7ced8db8 eth0: Data path switched to VF: enP12360s1 Jan 21 23:38:11.290355 systemd-networkd[940]: enP12360s1: Link UP Jan 21 23:38:11.290544 systemd-networkd[940]: enP12360s1: Gained carrier Jan 21 23:38:11.403605 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 21 23:38:11.434059 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1391) Jan 21 23:38:11.444715 kernel: BTRFS info (device sda6): first mount of filesystem 34fe2ffc-17d8-4635-9624-842bf41e4932 Jan 21 23:38:11.444724 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 21 23:38:11.454023 kernel: BTRFS info (device sda6): turning on async discard Jan 21 23:38:11.454067 kernel: BTRFS info (device sda6): enabling free space tree Jan 21 23:38:11.455668 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 21 23:38:11.485572 ignition[1409]: INFO : Ignition 2.22.0 Jan 21 23:38:11.485572 ignition[1409]: INFO : Stage: files Jan 21 23:38:11.485572 ignition[1409]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 21 23:38:11.485572 ignition[1409]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 21 23:38:11.485572 ignition[1409]: DEBUG : files: compiled without relabeling support, skipping Jan 21 23:38:11.506260 ignition[1409]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 21 23:38:11.506260 ignition[1409]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 21 23:38:11.518116 ignition[1409]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 21 23:38:11.518116 ignition[1409]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 21 23:38:11.518116 ignition[1409]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 21 23:38:11.517889 unknown[1409]: wrote ssh authorized keys file for user: core Jan 21 23:38:11.540032 ignition[1409]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 21 23:38:11.540032 ignition[1409]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jan 21 23:38:11.570918 ignition[1409]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 21 23:38:11.789452 ignition[1409]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 21 23:38:11.797693 ignition[1409]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 21 23:38:11.797693 ignition[1409]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 21 23:38:11.797693 ignition[1409]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 21 23:38:11.797693 ignition[1409]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 21 23:38:11.797693 ignition[1409]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 21 23:38:11.797693 ignition[1409]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 21 23:38:11.797693 ignition[1409]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 21 23:38:11.797693 ignition[1409]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 21 23:38:11.856344 ignition[1409]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 21 23:38:11.856344 ignition[1409]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 21 23:38:11.856344 ignition[1409]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 21 23:38:11.856344 ignition[1409]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 21 23:38:11.856344 ignition[1409]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 21 23:38:11.856344 ignition[1409]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jan 21 23:38:12.355535 ignition[1409]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 21 23:38:12.627438 ignition[1409]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 21 23:38:12.627438 ignition[1409]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 21 23:38:12.643520 ignition[1409]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 21 23:38:12.657138 ignition[1409]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 21 23:38:12.665205 ignition[1409]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 21 23:38:12.665205 ignition[1409]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 21 23:38:12.665205 ignition[1409]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 21 23:38:12.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:12.701198 ignition[1409]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 21 23:38:12.701198 ignition[1409]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 21 23:38:12.701198 ignition[1409]: INFO : files: files passed Jan 21 23:38:12.701198 ignition[1409]: INFO : Ignition finished successfully Jan 21 23:38:12.731503 kernel: audit: type=1130 audit(1769038692.683:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:12.678111 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 21 23:38:12.686029 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 21 23:38:12.758136 kernel: audit: type=1130 audit(1769038692.740:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:12.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:12.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:12.726618 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
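[Annotation] The files stage above adds SSH keys for "core", downloads the Helm tarball, writes several YAML files and update.conf, creates the kubernetes sysext link, and enables prepare-helm.service; the /sysroot prefix in the log is the mounted target root, so config paths are expressed relative to /. A hedged Python sketch of the general shape of an Ignition (spec 3.x) config that could drive such operations; only the paths and URLs are taken from the log, while the spec version, inline contents, and unit body are illustrative assumptions:

    import json

    # Illustrative only: mirrors the paths/URLs logged above; everything else
    # (spec version, data: contents, unit text) is an assumption.
    config = {
        "ignition": {"version": "3.4.0"},
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.17.3-linux-arm64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz"}},
                {"path": "/etc/flatcar/update.conf",
                 "contents": {"source": "data:,GROUP%3Dstable%0A"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"},
            ],
        },
        "systemd": {
            "units": [
                {"name": "prepare-helm.service", "enabled": True,
                 # Placeholder unit body; the real unit contents are not in the log.
                 "contents": "[Unit]\nDescription=Unpack helm\n[Service]\nType=oneshot\n"
                             "ExecStart=/usr/bin/true\n[Install]\nWantedBy=multi-user.target\n"},
            ],
        },
    }

    print(json.dumps(config, indent=2))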
Jan 21 23:38:12.733371 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 21 23:38:12.733454 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 21 23:38:12.774230 initrd-setup-root-after-ignition[1439]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 21 23:38:12.780628 initrd-setup-root-after-ignition[1439]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 21 23:38:12.787708 initrd-setup-root-after-ignition[1443]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 21 23:38:12.788603 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 21 23:38:12.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:12.800375 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 21 23:38:12.811831 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 21 23:38:12.847479 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 21 23:38:12.847560 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 21 23:38:12.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:12.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:12.857334 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 21 23:38:12.866029 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 21 23:38:12.874458 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 21 23:38:12.875246 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 21 23:38:12.906499 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 21 23:38:12.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:12.913092 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 21 23:38:12.937430 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 21 23:38:12.937531 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 21 23:38:12.947595 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 21 23:38:12.956680 systemd[1]: Stopped target timers.target - Timer Units. Jan 21 23:38:12.965246 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 21 23:38:12.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:12.965362 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 21 23:38:12.977127 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
Jan 21 23:38:12.981346 systemd[1]: Stopped target basic.target - Basic System. Jan 21 23:38:12.989453 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 21 23:38:12.997899 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 21 23:38:13.005965 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 21 23:38:13.015206 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 21 23:38:13.024214 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 21 23:38:13.032890 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 21 23:38:13.042339 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 21 23:38:13.050603 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 21 23:38:13.060223 systemd[1]: Stopped target swap.target - Swaps. Jan 21 23:38:13.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.067482 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 21 23:38:13.067589 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 21 23:38:13.078972 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 21 23:38:13.083683 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 21 23:38:13.092531 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 21 23:38:13.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.101062 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 21 23:38:13.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.106505 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 21 23:38:13.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.106603 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 21 23:38:13.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.119254 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 21 23:38:13.119343 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 21 23:38:13.124631 systemd[1]: ignition-files.service: Deactivated successfully. Jan 21 23:38:13.124696 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 21 23:38:13.134216 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 21 23:38:13.134289 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 21 23:38:13.148223 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 21 23:38:13.171215 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 21 23:38:13.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.206007 ignition[1463]: INFO : Ignition 2.22.0 Jan 21 23:38:13.206007 ignition[1463]: INFO : Stage: umount Jan 21 23:38:13.206007 ignition[1463]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 21 23:38:13.206007 ignition[1463]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 21 23:38:13.206007 ignition[1463]: INFO : umount: umount passed Jan 21 23:38:13.206007 ignition[1463]: INFO : Ignition finished successfully Jan 21 23:38:13.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.187543 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 21 23:38:13.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.187665 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 21 23:38:13.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.200757 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 21 23:38:13.200845 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 21 23:38:13.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.215481 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 21 23:38:13.215571 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 21 23:38:13.224786 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 21 23:38:13.226935 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 21 23:38:13.234293 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jan 21 23:38:13.236643 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 21 23:38:13.236926 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 21 23:38:13.245560 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 21 23:38:13.245617 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 21 23:38:13.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.251803 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 21 23:38:13.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.251838 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 21 23:38:13.259502 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 21 23:38:13.259533 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 21 23:38:13.267746 systemd[1]: Stopped target network.target - Network. Jan 21 23:38:13.276615 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 21 23:38:13.276659 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 21 23:38:13.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.411000 audit: BPF prog-id=9 op=UNLOAD Jan 21 23:38:13.285358 systemd[1]: Stopped target paths.target - Path Units. Jan 21 23:38:13.294402 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 21 23:38:13.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.298059 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 21 23:38:13.431000 audit: BPF prog-id=6 op=UNLOAD Jan 21 23:38:13.303560 systemd[1]: Stopped target slices.target - Slice Units. Jan 21 23:38:13.311819 systemd[1]: Stopped target sockets.target - Socket Units. Jan 21 23:38:13.320312 systemd[1]: iscsid.socket: Deactivated successfully. Jan 21 23:38:13.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.320363 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 21 23:38:13.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.328547 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 21 23:38:13.328570 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 21 23:38:13.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.336598 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. 
Jan 21 23:38:13.336613 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 21 23:38:13.351403 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 21 23:38:13.351470 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 21 23:38:13.359950 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 21 23:38:13.359988 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 21 23:38:13.368853 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 21 23:38:13.377392 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 21 23:38:13.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.399173 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 21 23:38:13.399289 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 21 23:38:13.413483 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 21 23:38:13.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.413582 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 21 23:38:13.573902 kernel: hv_netvsc 7ced8db8-828d-7ced-8db8-828d7ced8db8 eth0: Data path switched from VF: enP12360s1 Jan 21 23:38:13.427078 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 21 23:38:13.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.437258 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 21 23:38:13.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.437297 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 21 23:38:13.446645 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 21 23:38:13.455636 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 21 23:38:13.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.455701 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 21 23:38:13.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.460718 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 21 23:38:13.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.460750 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jan 21 23:38:13.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.468837 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 21 23:38:13.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.468873 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 21 23:38:13.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.485268 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 21 23:38:13.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.518858 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 21 23:38:13.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.519224 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 21 23:38:13.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:13.529215 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 21 23:38:13.529249 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 21 23:38:13.537721 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 21 23:38:13.537746 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 21 23:38:13.545849 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 21 23:38:13.545903 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 21 23:38:13.565373 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 21 23:38:13.565433 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 21 23:38:13.578522 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 21 23:38:13.578580 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 21 23:38:13.588816 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 21 23:38:13.604569 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 21 23:38:13.604638 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 21 23:38:13.610247 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 21 23:38:13.610294 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
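[Annotation] The initrd teardown above is dominated by kernel audit SERVICE_START/SERVICE_STOP records. A small Python sketch of one way to split such a record into its key=value fields; the sample line is abridged from the log, and the parsing approach is illustrative rather than how auditd or ausearch actually process records:

    import re

    # Abridged SERVICE_STOP record of the kind logged above.
    SAMPLE = ("audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 "
              "subj=kernel msg='unit=ignition-mount comm=\"systemd\" "
              "exe=\"/usr/lib/systemd/systemd\" res=success'")

    def parse_audit(line):
        # Record type is the first token after the "audit[pid]:" prefix.
        rec_type = line.split(": ", 1)[1].split(" ", 1)[0]
        # key=value pairs; quoted values (like msg='...') are kept whole.
        fields = dict(re.findall(r"(\w+)=(\"[^\"]*\"|'[^']*'|\S+)", line))
        return rec_type, fields

    if __name__ == "__main__":
        rtype, fields = parse_audit(SAMPLE)
        print(rtype, fields["pid"], fields["msg"])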
Jan 21 23:38:13.620126 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 21 23:38:13.779031 systemd-journald[535]: Received SIGTERM from PID 1 (systemd). Jan 21 23:38:13.620175 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 21 23:38:13.631000 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 21 23:38:13.631058 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 21 23:38:13.639811 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 21 23:38:13.639855 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 23:38:13.649201 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 21 23:38:13.649295 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 21 23:38:13.656788 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 21 23:38:13.656866 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 21 23:38:13.666386 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 21 23:38:13.666487 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 21 23:38:13.676852 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 21 23:38:13.676951 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 21 23:38:13.685034 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 21 23:38:13.695382 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 21 23:38:13.722029 systemd[1]: Switching root. Jan 21 23:38:13.846573 systemd-journald[535]: Journal stopped Jan 21 23:38:16.007563 kernel: SELinux: policy capability network_peer_controls=1 Jan 21 23:38:16.007582 kernel: SELinux: policy capability open_perms=1 Jan 21 23:38:16.007590 kernel: SELinux: policy capability extended_socket_class=1 Jan 21 23:38:16.007596 kernel: SELinux: policy capability always_check_network=0 Jan 21 23:38:16.007602 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 21 23:38:16.007608 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 21 23:38:16.007615 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 21 23:38:16.007620 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 21 23:38:16.007626 kernel: SELinux: policy capability userspace_initial_context=0 Jan 21 23:38:16.007633 systemd[1]: Successfully loaded SELinux policy in 99.590ms. Jan 21 23:38:16.007641 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.485ms. Jan 21 23:38:16.007649 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 21 23:38:16.007655 systemd[1]: Detected virtualization microsoft. Jan 21 23:38:16.007662 systemd[1]: Detected architecture arm64. Jan 21 23:38:16.007670 systemd[1]: Detected first boot. Jan 21 23:38:16.007677 systemd[1]: Hostname set to . Jan 21 23:38:16.007683 systemd[1]: Initializing machine ID from random generator. Jan 21 23:38:16.007689 zram_generator::config[1505]: No configuration found. Jan 21 23:38:16.007696 kernel: NET: Registered PF_VSOCK protocol family Jan 21 23:38:16.007703 systemd[1]: Populated /etc with preset unit settings. 
Jan 21 23:38:16.007710 kernel: kauditd_printk_skb: 50 callbacks suppressed Jan 21 23:38:16.007716 kernel: audit: type=1334 audit(1769038695.148:94): prog-id=12 op=LOAD Jan 21 23:38:16.007722 kernel: audit: type=1334 audit(1769038695.152:95): prog-id=3 op=UNLOAD Jan 21 23:38:16.007728 kernel: audit: type=1334 audit(1769038695.153:96): prog-id=13 op=LOAD Jan 21 23:38:16.007734 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 21 23:38:16.007741 kernel: audit: type=1334 audit(1769038695.157:97): prog-id=14 op=LOAD Jan 21 23:38:16.007748 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 21 23:38:16.007754 kernel: audit: type=1334 audit(1769038695.157:98): prog-id=4 op=UNLOAD Jan 21 23:38:16.007760 kernel: audit: type=1334 audit(1769038695.157:99): prog-id=5 op=UNLOAD Jan 21 23:38:16.007766 kernel: audit: type=1131 audit(1769038695.157:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.007774 kernel: audit: type=1130 audit(1769038695.202:101): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.007782 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 21 23:38:16.007789 kernel: audit: type=1131 audit(1769038695.202:102): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.007795 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 21 23:38:16.007802 kernel: audit: type=1334 audit(1769038695.203:103): prog-id=12 op=UNLOAD Jan 21 23:38:16.007809 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 21 23:38:16.007816 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 21 23:38:16.007823 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 21 23:38:16.007830 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 21 23:38:16.007837 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 21 23:38:16.007845 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 21 23:38:16.007851 systemd[1]: Created slice user.slice - User and Session Slice. Jan 21 23:38:16.007858 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 21 23:38:16.007866 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 21 23:38:16.007872 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 21 23:38:16.007879 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 21 23:38:16.007886 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 21 23:38:16.007892 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 21 23:38:16.007899 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... 
Jan 21 23:38:16.007906 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 21 23:38:16.007913 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 21 23:38:16.007920 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 21 23:38:16.007927 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 21 23:38:16.007933 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 21 23:38:16.007940 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 21 23:38:16.007946 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 21 23:38:16.007954 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 21 23:38:16.007960 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 21 23:38:16.007967 systemd[1]: Reached target slices.target - Slice Units. Jan 21 23:38:16.007974 systemd[1]: Reached target swap.target - Swaps. Jan 21 23:38:16.007980 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 21 23:38:16.007987 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 21 23:38:16.007995 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 21 23:38:16.008002 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 21 23:38:16.008009 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 21 23:38:16.008016 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 21 23:38:16.008024 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 21 23:38:16.008031 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 21 23:38:16.008038 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 21 23:38:16.008382 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 21 23:38:16.008402 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 21 23:38:16.008410 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 21 23:38:16.008417 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 21 23:38:16.008428 systemd[1]: Mounting media.mount - External Media Directory... Jan 21 23:38:16.008435 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 21 23:38:16.008443 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 21 23:38:16.008449 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 21 23:38:16.008457 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 21 23:38:16.008464 systemd[1]: Reached target machines.target - Containers. Jan 21 23:38:16.008471 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 21 23:38:16.008479 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 21 23:38:16.008486 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 21 23:38:16.008493 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Jan 21 23:38:16.008499 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 21 23:38:16.008506 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 21 23:38:16.008512 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 21 23:38:16.008520 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 21 23:38:16.008527 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 21 23:38:16.008534 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 21 23:38:16.008541 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 21 23:38:16.008548 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 21 23:38:16.008556 kernel: fuse: init (API version 7.41) Jan 21 23:38:16.008563 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 21 23:38:16.008570 kernel: ACPI: bus type drm_connector registered Jan 21 23:38:16.008577 systemd[1]: Stopped systemd-fsck-usr.service. Jan 21 23:38:16.008584 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 21 23:38:16.008593 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 21 23:38:16.008600 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 21 23:38:16.008607 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 21 23:38:16.008634 systemd-journald[1610]: Collecting audit messages is enabled. Jan 21 23:38:16.008650 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 21 23:38:16.008657 systemd-journald[1610]: Journal started Jan 21 23:38:16.008673 systemd-journald[1610]: Runtime Journal (/run/log/journal/3adcabd8a23b482fa121457bffffeaf4) is 8M, max 78.3M, 70.3M free. Jan 21 23:38:15.560000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 21 23:38:15.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:15.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:38:15.953000 audit: BPF prog-id=14 op=UNLOAD Jan 21 23:38:15.953000 audit: BPF prog-id=13 op=UNLOAD Jan 21 23:38:15.954000 audit: BPF prog-id=15 op=LOAD Jan 21 23:38:15.954000 audit: BPF prog-id=16 op=LOAD Jan 21 23:38:15.954000 audit: BPF prog-id=17 op=LOAD Jan 21 23:38:16.004000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 21 23:38:16.004000 audit[1610]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=fffff9c755e0 a2=4000 a3=0 items=0 ppid=1 pid=1610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:16.004000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 21 23:38:15.146064 systemd[1]: Queued start job for default target multi-user.target. Jan 21 23:38:15.158820 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 21 23:38:15.159248 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 21 23:38:15.159535 systemd[1]: systemd-journald.service: Consumed 2.515s CPU time. Jan 21 23:38:16.032526 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 21 23:38:16.047074 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 21 23:38:16.056747 systemd[1]: Started systemd-journald.service - Journal Service. Jan 21 23:38:16.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.057763 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 21 23:38:16.062461 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 21 23:38:16.067442 systemd[1]: Mounted media.mount - External Media Directory. Jan 21 23:38:16.072007 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 21 23:38:16.076882 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 21 23:38:16.081618 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 21 23:38:16.086362 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 21 23:38:16.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.092200 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 21 23:38:16.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.098581 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 21 23:38:16.098771 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 21 23:38:16.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:38:16.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.105063 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 21 23:38:16.105249 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 21 23:38:16.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.110625 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 21 23:38:16.110825 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 21 23:38:16.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.115742 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 21 23:38:16.115935 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 21 23:38:16.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.122411 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 21 23:38:16.122601 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 21 23:38:16.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.127761 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 21 23:38:16.127952 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 21 23:38:16.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:38:16.133424 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 21 23:38:16.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.138670 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 21 23:38:16.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.145251 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 21 23:38:16.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.151465 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 21 23:38:16.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.157595 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 21 23:38:16.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.171862 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 21 23:38:16.177233 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 21 23:38:16.183761 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 21 23:38:16.199324 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 21 23:38:16.204538 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 21 23:38:16.204564 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 21 23:38:16.209701 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 21 23:38:16.216131 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 21 23:38:16.216218 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 21 23:38:16.218955 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 21 23:38:16.225175 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 21 23:38:16.231337 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 21 23:38:16.234256 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 21 23:38:16.242161 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 21 23:38:16.242916 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 21 23:38:16.254814 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 21 23:38:16.255995 systemd-journald[1610]: Time spent on flushing to /var/log/journal/3adcabd8a23b482fa121457bffffeaf4 is 37.208ms for 1077 entries. Jan 21 23:38:16.255995 systemd-journald[1610]: System Journal (/var/log/journal/3adcabd8a23b482fa121457bffffeaf4) is 8M, max 2.2G, 2.2G free. Jan 21 23:38:16.324924 systemd-journald[1610]: Received client request to flush runtime journal. Jan 21 23:38:16.324962 kernel: loop1: detected capacity change from 0 to 211168 Jan 21 23:38:16.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.266265 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 21 23:38:16.274591 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 21 23:38:16.280447 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 21 23:38:16.286985 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 21 23:38:16.293852 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 21 23:38:16.302181 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 21 23:38:16.320123 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 21 23:38:16.326186 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 21 23:38:16.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.346488 systemd-tmpfiles[1647]: ACLs are not supported, ignoring. Jan 21 23:38:16.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.348104 systemd-tmpfiles[1647]: ACLs are not supported, ignoring. Jan 21 23:38:16.353188 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 21 23:38:16.363218 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 21 23:38:16.395035 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 21 23:38:16.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.415072 kernel: loop2: detected capacity change from 0 to 109872 Jan 21 23:38:16.427174 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
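The journal flush entry above reports 37.208 ms spent flushing 1077 entries to /var/log/journal/3adcabd8a23b482fa121457bffffeaf4. A minimal Python sketch of the implied per-entry cost; the figures are copied verbatim from the log, and the averaging itself is only a back-of-the-envelope calculation, not something systemd reports.

```python
# Figures copied from the systemd-journald flush message above.
flush_ms = 37.208      # total time spent flushing to the persistent journal
entries = 1077         # number of journal entries flushed

per_entry_us = flush_ms * 1000 / entries
print(f"~{per_entry_us:.1f} us per entry")   # roughly 34.5 microseconds per entry
```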
Jan 21 23:38:16.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.432000 audit: BPF prog-id=18 op=LOAD Jan 21 23:38:16.432000 audit: BPF prog-id=19 op=LOAD Jan 21 23:38:16.432000 audit: BPF prog-id=20 op=LOAD Jan 21 23:38:16.435204 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 21 23:38:16.440000 audit: BPF prog-id=21 op=LOAD Jan 21 23:38:16.443793 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 21 23:38:16.452165 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 21 23:38:16.462000 audit: BPF prog-id=22 op=LOAD Jan 21 23:38:16.463000 audit: BPF prog-id=23 op=LOAD Jan 21 23:38:16.463000 audit: BPF prog-id=24 op=LOAD Jan 21 23:38:16.464607 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 21 23:38:16.472000 audit: BPF prog-id=25 op=LOAD Jan 21 23:38:16.472000 audit: BPF prog-id=26 op=LOAD Jan 21 23:38:16.472000 audit: BPF prog-id=27 op=LOAD Jan 21 23:38:16.474584 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 21 23:38:16.485806 systemd-tmpfiles[1667]: ACLs are not supported, ignoring. Jan 21 23:38:16.485821 systemd-tmpfiles[1667]: ACLs are not supported, ignoring. Jan 21 23:38:16.491207 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 21 23:38:16.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.522160 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 21 23:38:16.523880 systemd-nsresourced[1669]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 21 23:38:16.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.527407 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 21 23:38:16.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.626069 kernel: loop3: detected capacity change from 0 to 27736 Jan 21 23:38:16.638966 systemd-oomd[1665]: No swap; memory pressure usage will be degraded Jan 21 23:38:16.639583 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 21 23:38:16.646739 systemd-resolved[1666]: Positive Trust Anchors: Jan 21 23:38:16.646753 systemd-resolved[1666]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 21 23:38:16.646757 systemd-resolved[1666]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 21 23:38:16.646776 systemd-resolved[1666]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 21 23:38:16.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.664208 systemd-resolved[1666]: Using system hostname 'ci-4515.1.0-n-a0ba06055b'. Jan 21 23:38:16.671990 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 21 23:38:16.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.677665 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 21 23:38:16.729807 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 21 23:38:16.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.734000 audit: BPF prog-id=8 op=UNLOAD Jan 21 23:38:16.734000 audit: BPF prog-id=7 op=UNLOAD Jan 21 23:38:16.735000 audit: BPF prog-id=28 op=LOAD Jan 21 23:38:16.735000 audit: BPF prog-id=29 op=LOAD Jan 21 23:38:16.736880 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 21 23:38:16.759089 kernel: loop4: detected capacity change from 0 to 100192 Jan 21 23:38:16.763886 systemd-udevd[1690]: Using default interface naming scheme 'v257'. Jan 21 23:38:16.836505 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 21 23:38:16.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.845000 audit: BPF prog-id=30 op=LOAD Jan 21 23:38:16.847870 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 21 23:38:16.876076 kernel: loop5: detected capacity change from 0 to 211168 Jan 21 23:38:16.902060 kernel: loop6: detected capacity change from 0 to 109872 Jan 21 23:38:16.915559 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 21 23:38:16.921059 kernel: loop7: detected capacity change from 0 to 27736 Jan 21 23:38:16.932945 systemd-networkd[1703]: lo: Link UP Jan 21 23:38:16.932955 systemd-networkd[1703]: lo: Gained carrier Jan 21 23:38:16.934704 systemd[1]: Started systemd-networkd.service - Network Configuration. 
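systemd-resolved logs the two root-zone DS records it uses as positive trust anchors (key tags 20326 and 38696) plus a list of negative trust anchors. The sketch below just captures those logged DS records as structured data; the field layout (key tag, algorithm, digest type, digest) follows the usual DS record format, and reading algorithm 8 / digest type 2 as RSA-SHA256 / SHA-256 is my own annotation rather than something the log states.

```python
# Root-zone DS trust anchors exactly as logged by systemd-resolved above.
ROOT_DS = [
    # (key_tag, algorithm, digest_type, digest)
    (20326, 8, 2, "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"),
    (38696, 8, 2, "683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16"),
]

for key_tag, alg, digest_type, digest in ROOT_DS:
    # algorithm 8 = RSA/SHA-256, digest type 2 = SHA-256 (standard DNSSEC code points)
    print(f". IN DS {key_tag} {alg} {digest_type} {digest[:16]}...")
```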
Jan 21 23:38:16.935265 systemd-networkd[1703]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 21 23:38:16.935268 systemd-networkd[1703]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 21 23:38:16.941064 kernel: loop1: detected capacity change from 0 to 100192 Jan 21 23:38:16.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:16.945899 systemd[1]: Reached target network.target - Network. Jan 21 23:38:16.951519 (sd-merge)[1707]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Jan 21 23:38:16.953899 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 21 23:38:16.955086 (sd-merge)[1707]: Merged extensions into '/usr'. Jan 21 23:38:16.962846 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 21 23:38:16.970369 systemd[1]: Reload requested from client PID 1645 ('systemd-sysext') (unit systemd-sysext.service)... Jan 21 23:38:16.970378 systemd[1]: Reloading... Jan 21 23:38:16.977058 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#272 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 21 23:38:17.004248 kernel: mousedev: PS/2 mouse device common for all mice Jan 21 23:38:17.012469 kernel: mlx5_core 3048:00:02.0 enP12360s1: Link up Jan 21 23:38:17.038182 kernel: hv_netvsc 7ced8db8-828d-7ced-8db8-828d7ced8db8 eth0: Data path switched to VF: enP12360s1 Jan 21 23:38:17.042991 systemd-networkd[1703]: enP12360s1: Link UP Jan 21 23:38:17.044884 systemd-networkd[1703]: eth0: Link UP Jan 21 23:38:17.046227 systemd-networkd[1703]: eth0: Gained carrier Jan 21 23:38:17.046309 systemd-networkd[1703]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 21 23:38:17.055642 systemd-networkd[1703]: enP12360s1: Gained carrier Jan 21 23:38:17.077109 systemd-networkd[1703]: eth0: DHCPv4 address 10.200.20.29/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 21 23:38:17.092066 zram_generator::config[1780]: No configuration found. Jan 21 23:38:17.092158 kernel: hv_vmbus: registering driver hv_balloon Jan 21 23:38:17.095880 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 21 23:38:17.099725 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 21 23:38:17.119761 kernel: hv_vmbus: registering driver hyperv_fb Jan 21 23:38:17.119821 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 21 23:38:17.125416 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 21 23:38:17.137242 kernel: Console: switching to colour dummy device 80x25 Jan 21 23:38:17.167067 kernel: Console: switching to colour frame buffer device 128x48 Jan 21 23:38:17.233074 kernel: MACsec IEEE 802.1AE Jan 21 23:38:17.353358 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 21 23:38:17.353493 systemd[1]: Reloading finished in 382 ms. Jan 21 23:38:17.376076 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
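systemd-networkd logs a DHCPv4 lease of 10.200.20.29/24 with gateway 10.200.20.1, handed out by 168.63.129.16. A small sketch using Python's ipaddress module to spell out the subnet facts implied by that lease; only the address and gateway come from the log, the derived values are ordinary /24 arithmetic.

```python
import ipaddress

# Lease values copied from the systemd-networkd DHCPv4 message above.
iface = ipaddress.ip_interface("10.200.20.29/24")
gateway = ipaddress.ip_address("10.200.20.1")

print(iface.network)                      # 10.200.20.0/24
print(iface.network.broadcast_address)    # 10.200.20.255
print(iface.network.num_addresses - 2)    # 254 usable host addresses
print(gateway in iface.network)           # True: the gateway is on-link
```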
Jan 21 23:38:17.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:17.381548 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 21 23:38:17.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:17.411148 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 21 23:38:17.434120 systemd[1]: Starting ensure-sysext.service... Jan 21 23:38:17.440155 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 21 23:38:17.451689 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 21 23:38:17.459132 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 21 23:38:17.465000 audit: BPF prog-id=31 op=LOAD Jan 21 23:38:17.465000 audit: BPF prog-id=25 op=UNLOAD Jan 21 23:38:17.465000 audit: BPF prog-id=32 op=LOAD Jan 21 23:38:17.465000 audit: BPF prog-id=33 op=LOAD Jan 21 23:38:17.465000 audit: BPF prog-id=26 op=UNLOAD Jan 21 23:38:17.465000 audit: BPF prog-id=27 op=UNLOAD Jan 21 23:38:17.466000 audit: BPF prog-id=34 op=LOAD Jan 21 23:38:17.466000 audit: BPF prog-id=35 op=LOAD Jan 21 23:38:17.466000 audit: BPF prog-id=28 op=UNLOAD Jan 21 23:38:17.466000 audit: BPF prog-id=29 op=UNLOAD Jan 21 23:38:17.466000 audit: BPF prog-id=36 op=LOAD Jan 21 23:38:17.467000 audit: BPF prog-id=30 op=UNLOAD Jan 21 23:38:17.467000 audit: BPF prog-id=37 op=LOAD Jan 21 23:38:17.467000 audit: BPF prog-id=22 op=UNLOAD Jan 21 23:38:17.467000 audit: BPF prog-id=38 op=LOAD Jan 21 23:38:17.467000 audit: BPF prog-id=39 op=LOAD Jan 21 23:38:17.467000 audit: BPF prog-id=23 op=UNLOAD Jan 21 23:38:17.467000 audit: BPF prog-id=24 op=UNLOAD Jan 21 23:38:17.467000 audit: BPF prog-id=40 op=LOAD Jan 21 23:38:17.469000 audit: BPF prog-id=21 op=UNLOAD Jan 21 23:38:17.469000 audit: BPF prog-id=41 op=LOAD Jan 21 23:38:17.469000 audit: BPF prog-id=18 op=UNLOAD Jan 21 23:38:17.469000 audit: BPF prog-id=42 op=LOAD Jan 21 23:38:17.469000 audit: BPF prog-id=43 op=LOAD Jan 21 23:38:17.469000 audit: BPF prog-id=19 op=UNLOAD Jan 21 23:38:17.469000 audit: BPF prog-id=20 op=UNLOAD Jan 21 23:38:17.470000 audit: BPF prog-id=44 op=LOAD Jan 21 23:38:17.470000 audit: BPF prog-id=15 op=UNLOAD Jan 21 23:38:17.470000 audit: BPF prog-id=45 op=LOAD Jan 21 23:38:17.470000 audit: BPF prog-id=46 op=LOAD Jan 21 23:38:17.470000 audit: BPF prog-id=16 op=UNLOAD Jan 21 23:38:17.470000 audit: BPF prog-id=17 op=UNLOAD Jan 21 23:38:17.475744 systemd[1]: Reload requested from client PID 1899 ('systemctl') (unit ensure-sysext.service)... Jan 21 23:38:17.475758 systemd[1]: Reloading... Jan 21 23:38:17.478740 systemd-tmpfiles[1901]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 21 23:38:17.478762 systemd-tmpfiles[1901]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 21 23:38:17.478965 systemd-tmpfiles[1901]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 21 23:38:17.480480 systemd-tmpfiles[1901]: ACLs are not supported, ignoring. 
Jan 21 23:38:17.480531 systemd-tmpfiles[1901]: ACLs are not supported, ignoring. Jan 21 23:38:17.493567 systemd-tmpfiles[1901]: Detected autofs mount point /boot during canonicalization of boot. Jan 21 23:38:17.493679 systemd-tmpfiles[1901]: Skipping /boot Jan 21 23:38:17.501424 systemd-tmpfiles[1901]: Detected autofs mount point /boot during canonicalization of boot. Jan 21 23:38:17.501507 systemd-tmpfiles[1901]: Skipping /boot Jan 21 23:38:17.544153 zram_generator::config[1944]: No configuration found. Jan 21 23:38:17.696634 systemd[1]: Reloading finished in 220 ms. Jan 21 23:38:17.719000 audit: BPF prog-id=47 op=LOAD Jan 21 23:38:17.719000 audit: BPF prog-id=40 op=UNLOAD Jan 21 23:38:17.720000 audit: BPF prog-id=48 op=LOAD Jan 21 23:38:17.720000 audit: BPF prog-id=37 op=UNLOAD Jan 21 23:38:17.720000 audit: BPF prog-id=49 op=LOAD Jan 21 23:38:17.720000 audit: BPF prog-id=50 op=LOAD Jan 21 23:38:17.720000 audit: BPF prog-id=38 op=UNLOAD Jan 21 23:38:17.720000 audit: BPF prog-id=39 op=UNLOAD Jan 21 23:38:17.720000 audit: BPF prog-id=51 op=LOAD Jan 21 23:38:17.720000 audit: BPF prog-id=52 op=LOAD Jan 21 23:38:17.720000 audit: BPF prog-id=34 op=UNLOAD Jan 21 23:38:17.720000 audit: BPF prog-id=35 op=UNLOAD Jan 21 23:38:17.721000 audit: BPF prog-id=53 op=LOAD Jan 21 23:38:17.721000 audit: BPF prog-id=41 op=UNLOAD Jan 21 23:38:17.721000 audit: BPF prog-id=54 op=LOAD Jan 21 23:38:17.721000 audit: BPF prog-id=55 op=LOAD Jan 21 23:38:17.721000 audit: BPF prog-id=42 op=UNLOAD Jan 21 23:38:17.721000 audit: BPF prog-id=43 op=UNLOAD Jan 21 23:38:17.721000 audit: BPF prog-id=56 op=LOAD Jan 21 23:38:17.721000 audit: BPF prog-id=44 op=UNLOAD Jan 21 23:38:17.721000 audit: BPF prog-id=57 op=LOAD Jan 21 23:38:17.721000 audit: BPF prog-id=58 op=LOAD Jan 21 23:38:17.721000 audit: BPF prog-id=45 op=UNLOAD Jan 21 23:38:17.721000 audit: BPF prog-id=46 op=UNLOAD Jan 21 23:38:17.722000 audit: BPF prog-id=59 op=LOAD Jan 21 23:38:17.722000 audit: BPF prog-id=31 op=UNLOAD Jan 21 23:38:17.722000 audit: BPF prog-id=60 op=LOAD Jan 21 23:38:17.722000 audit: BPF prog-id=61 op=LOAD Jan 21 23:38:17.722000 audit: BPF prog-id=32 op=UNLOAD Jan 21 23:38:17.722000 audit: BPF prog-id=33 op=UNLOAD Jan 21 23:38:17.722000 audit: BPF prog-id=62 op=LOAD Jan 21 23:38:17.722000 audit: BPF prog-id=36 op=UNLOAD Jan 21 23:38:17.728065 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 21 23:38:17.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:17.734076 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 21 23:38:17.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:17.740923 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 23:38:17.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:17.752932 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
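The audit stream above is dominated by `BPF prog-id=N op=LOAD` / `op=UNLOAD` pairs emitted as systemd reloads and re-attaches its BPF programs during the two daemon reloads. A minimal, hypothetical Python sketch for tallying such lines and tracking which program IDs remain loaded; it keys only on the `prog-id=` and `op=` tokens visible in the log, and the sample lines are an abbreviated stand-in for the real stream.

```python
import re

# A few lines in the shape of the audit messages above (abbreviated stand-in).
lines = [
    "audit: BPF prog-id=47 op=LOAD",
    "audit: BPF prog-id=40 op=UNLOAD",
    "audit: BPF prog-id=48 op=LOAD",
]

pattern = re.compile(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)")
loaded = set()
for line in lines:
    m = pattern.search(line)
    if not m:
        continue
    prog_id, op = int(m.group(1)), m.group(2)
    if op == "LOAD":
        loaded.add(prog_id)
    else:
        loaded.discard(prog_id)

print(sorted(loaded))   # program IDs still loaded after the sample: [47, 48]
```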
Jan 21 23:38:17.764820 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 21 23:38:17.775867 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 21 23:38:17.782639 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 21 23:38:17.789068 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 21 23:38:17.798997 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 21 23:38:17.799828 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 21 23:38:17.802000 audit[2011]: SYSTEM_BOOT pid=2011 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 21 23:38:17.807541 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 21 23:38:17.813856 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 21 23:38:17.819437 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 21 23:38:17.819584 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 21 23:38:17.819652 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 21 23:38:17.822676 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 21 23:38:17.822809 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 21 23:38:17.822899 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 21 23:38:17.822953 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 21 23:38:17.824865 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 21 23:38:17.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:17.832567 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 21 23:38:17.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:17.843481 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 21 23:38:17.844505 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 21 23:38:17.849008 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 21 23:38:17.849157 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 21 23:38:17.849224 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 21 23:38:17.849314 systemd[1]: Reached target time-set.target - System Time Set. Jan 21 23:38:17.857139 systemd[1]: Finished ensure-sysext.service. Jan 21 23:38:17.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:17.862544 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 21 23:38:17.862722 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 21 23:38:17.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:17.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:17.869136 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 21 23:38:17.869287 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 21 23:38:17.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:17.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:17.875002 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 21 23:38:17.875222 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 21 23:38:17.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:17.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:17.880314 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 21 23:38:17.880452 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 21 23:38:17.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:17.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:38:17.887748 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 21 23:38:17.887807 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 21 23:38:17.902000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 21 23:38:17.902000 audit[2037]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff2518230 a2=420 a3=0 items=0 ppid=2002 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:17.902000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 21 23:38:17.904155 augenrules[2037]: No rules Jan 21 23:38:17.905466 systemd[1]: audit-rules.service: Deactivated successfully. Jan 21 23:38:17.905706 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 21 23:38:17.951003 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 21 23:38:17.957010 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 21 23:38:18.812154 systemd-networkd[1703]: eth0: Gained IPv6LL Jan 21 23:38:18.815138 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 21 23:38:18.821383 systemd[1]: Reached target network-online.target - Network is Online. Jan 21 23:38:19.068200 ldconfig[2004]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 21 23:38:19.077295 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 21 23:38:19.083498 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 21 23:38:19.095191 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 21 23:38:19.099906 systemd[1]: Reached target sysinit.target - System Initialization. Jan 21 23:38:19.104514 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 21 23:38:19.109699 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 21 23:38:19.115287 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 21 23:38:19.119852 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 21 23:38:19.125094 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 21 23:38:19.131000 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 21 23:38:19.135710 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 21 23:38:19.141124 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 21 23:38:19.141155 systemd[1]: Reached target paths.target - Path Units. Jan 21 23:38:19.144984 systemd[1]: Reached target timers.target - Timer Units. Jan 21 23:38:19.153230 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Jan 21 23:38:19.159097 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 21 23:38:19.164478 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 21 23:38:19.169900 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 21 23:38:19.175276 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 21 23:38:19.181290 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 21 23:38:19.185934 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 21 23:38:19.191513 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 21 23:38:19.195912 systemd[1]: Reached target sockets.target - Socket Units. Jan 21 23:38:19.199954 systemd[1]: Reached target basic.target - Basic System. Jan 21 23:38:19.203811 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 21 23:38:19.203833 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 21 23:38:19.205996 systemd[1]: Starting chronyd.service - NTP client/server... Jan 21 23:38:19.218156 systemd[1]: Starting containerd.service - containerd container runtime... Jan 21 23:38:19.223451 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 21 23:38:19.233252 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 21 23:38:19.243556 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 21 23:38:19.244610 chronyd[2050]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 21 23:38:19.250143 chronyd[2050]: Timezone right/UTC failed leap second check, ignoring Jan 21 23:38:19.250283 chronyd[2050]: Loaded seccomp filter (level 2) Jan 21 23:38:19.251146 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 21 23:38:19.263313 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 21 23:38:19.267332 jq[2058]: false Jan 21 23:38:19.267984 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 21 23:38:19.271207 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 21 23:38:19.275624 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 21 23:38:19.276698 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 23:38:19.278948 KVP[2060]: KVP starting; pid is:2060 Jan 21 23:38:19.283579 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 21 23:38:19.290787 kernel: hv_utils: KVP IC version 4.0 Jan 21 23:38:19.290308 KVP[2060]: KVP LIC Version: 3.1 Jan 21 23:38:19.290944 extend-filesystems[2059]: Found /dev/sda6 Jan 21 23:38:19.293995 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 21 23:38:19.302152 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 21 23:38:19.303154 extend-filesystems[2059]: Found /dev/sda9 Jan 21 23:38:19.312852 extend-filesystems[2059]: Checking size of /dev/sda9 Jan 21 23:38:19.316844 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
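chronyd announces its build options as a string of +FEATURE / -FEATURE tokens in its startup line above. A small sketch splitting that logged string into enabled and disabled feature sets; the string is copied verbatim from the log, the grouping is just string handling.

```python
# Feature string copied from the chronyd startup line above.
features = "+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG"

enabled = {tok[1:] for tok in features.split() if tok.startswith("+")}
disabled = {tok[1:] for tok in features.split() if tok.startswith("-")}

print(sorted(enabled))    # CMDMON, IPV6, NTS, PRIVDROP, REFCLOCK, RTC, SCFILTER, SECHASH
print(sorted(disabled))   # DEBUG, SIGND
```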
Jan 21 23:38:19.327385 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 21 23:38:19.334542 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 21 23:38:19.339082 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 21 23:38:19.339452 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 21 23:38:19.341173 systemd[1]: Starting update-engine.service - Update Engine... Jan 21 23:38:19.347438 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 21 23:38:19.351162 extend-filesystems[2059]: Resized partition /dev/sda9 Jan 21 23:38:19.366294 extend-filesystems[2095]: resize2fs 1.47.3 (8-Jul-2025) Jan 21 23:38:19.420621 kernel: EXT4-fs (sda9): resizing filesystem from 6359552 to 6376955 blocks Jan 21 23:38:19.420730 kernel: EXT4-fs (sda9): resized filesystem to 6376955 Jan 21 23:38:19.354948 systemd[1]: Started chronyd.service - NTP client/server. Jan 21 23:38:19.420833 update_engine[2086]: I20260121 23:38:19.377949 2086 main.cc:92] Flatcar Update Engine starting Jan 21 23:38:19.382775 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 21 23:38:19.421974 jq[2090]: true Jan 21 23:38:19.388161 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 21 23:38:19.388363 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 21 23:38:19.389401 systemd[1]: motdgen.service: Deactivated successfully. Jan 21 23:38:19.389555 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 21 23:38:19.398643 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 21 23:38:19.413207 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 21 23:38:19.413451 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 21 23:38:19.448948 jq[2112]: true Jan 21 23:38:19.463344 extend-filesystems[2095]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 21 23:38:19.463344 extend-filesystems[2095]: old_desc_blocks = 4, new_desc_blocks = 4 Jan 21 23:38:19.463344 extend-filesystems[2095]: The filesystem on /dev/sda9 is now 6376955 (4k) blocks long. Jan 21 23:38:19.538679 extend-filesystems[2059]: Resized filesystem in /dev/sda9 Jan 21 23:38:19.575865 update_engine[2086]: I20260121 23:38:19.533265 2086 update_check_scheduler.cc:74] Next update check in 9m42s Jan 21 23:38:19.469162 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 21 23:38:19.528527 dbus-daemon[2053]: [system] SELinux support is enabled Jan 21 23:38:19.576154 bash[2153]: Updated "/home/core/.ssh/authorized_keys" Jan 21 23:38:19.469415 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 21 23:38:19.545673 dbus-daemon[2053]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 21 23:38:19.477974 systemd-logind[2082]: New seat seat0. Jan 21 23:38:19.480860 systemd-logind[2082]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 21 23:38:19.486532 systemd[1]: Started systemd-logind.service - User Login Management. Jan 21 23:38:19.528773 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
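The kernel and resize2fs messages above show the root filesystem growing online from 6359552 to 6376955 blocks. A quick sketch of what that delta amounts to; the block counts come from the log, and the 4096-byte block size is taken from the "(4k) blocks" wording in the extend-filesystems output.

```python
BLOCK = 4096                                     # "(4k) blocks" per the resize2fs output
old_blocks, new_blocks = 6_359_552, 6_376_955    # from the EXT4-fs resize messages

grown_bytes = (new_blocks - old_blocks) * BLOCK
print(f"filesystem grew by {grown_bytes} bytes (~{grown_bytes / 2**20:.0f} MiB)")
print(f"new size ~{new_blocks * BLOCK / 2**30:.2f} GiB")
```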
Jan 21 23:38:19.545012 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 21 23:38:19.545033 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 21 23:38:19.554368 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 21 23:38:19.554384 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 21 23:38:19.580384 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 21 23:38:19.589095 systemd[1]: Started update-engine.service - Update Engine. Jan 21 23:38:19.594355 tar[2108]: linux-arm64/LICENSE Jan 21 23:38:19.594355 tar[2108]: linux-arm64/helm Jan 21 23:38:19.595403 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 21 23:38:19.596794 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 21 23:38:19.620356 coreos-metadata[2052]: Jan 21 23:38:19.619 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 21 23:38:19.626349 coreos-metadata[2052]: Jan 21 23:38:19.626 INFO Fetch successful Jan 21 23:38:19.626698 coreos-metadata[2052]: Jan 21 23:38:19.626 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 21 23:38:19.634992 coreos-metadata[2052]: Jan 21 23:38:19.634 INFO Fetch successful Jan 21 23:38:19.634992 coreos-metadata[2052]: Jan 21 23:38:19.634 INFO Fetching http://168.63.129.16/machine/47a3ab8f-16d9-4427-a359-c5a9b18e2cc6/5e5ab0ec%2D064e%2D4672%2D89df%2D89511c6d97c4.%5Fci%2D4515.1.0%2Dn%2Da0ba06055b?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 21 23:38:19.636742 coreos-metadata[2052]: Jan 21 23:38:19.636 INFO Fetch successful Jan 21 23:38:19.636987 coreos-metadata[2052]: Jan 21 23:38:19.636 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 21 23:38:19.650297 coreos-metadata[2052]: Jan 21 23:38:19.649 INFO Fetch successful Jan 21 23:38:19.702381 containerd[2113]: time="2026-01-21T23:38:19Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 21 23:38:19.706489 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 21 23:38:19.711346 containerd[2113]: time="2026-01-21T23:38:19.711227372Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 21 23:38:19.714030 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
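coreos-metadata fetches its data from 168.63.129.16 and from the instance metadata URL http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text shown above. A hypothetical Python sketch of the same metadata query: the URL is copied from the log, but the `Metadata: true` header is an assumption based on how Azure IMDS is normally queried (it does not appear in the log), and the request only works from inside an Azure VM.

```python
import urllib.request

# Metadata URL copied from the coreos-metadata fetch above; only reachable from
# inside an Azure VM. The "Metadata: true" header is an assumption about IMDS,
# not something the log shows.
URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
       "?api-version=2017-08-01&format=text")

req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.read().decode())   # e.g. the VM size string
```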
Jan 21 23:38:19.731931 containerd[2113]: time="2026-01-21T23:38:19.731893220Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.808µs" Jan 21 23:38:19.732473 containerd[2113]: time="2026-01-21T23:38:19.732450444Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 21 23:38:19.732651 containerd[2113]: time="2026-01-21T23:38:19.732635500Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 21 23:38:19.732802 containerd[2113]: time="2026-01-21T23:38:19.732787924Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 21 23:38:19.733076 containerd[2113]: time="2026-01-21T23:38:19.733027508Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 21 23:38:19.733514 containerd[2113]: time="2026-01-21T23:38:19.733497372Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 21 23:38:19.733749 containerd[2113]: time="2026-01-21T23:38:19.733732188Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 21 23:38:19.734037 containerd[2113]: time="2026-01-21T23:38:19.734020244Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 21 23:38:19.734445 containerd[2113]: time="2026-01-21T23:38:19.734421692Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 21 23:38:19.734999 containerd[2113]: time="2026-01-21T23:38:19.734979548Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 21 23:38:19.735164 containerd[2113]: time="2026-01-21T23:38:19.735150492Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 21 23:38:19.735251 containerd[2113]: time="2026-01-21T23:38:19.735235148Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 21 23:38:19.735546 containerd[2113]: time="2026-01-21T23:38:19.735527124Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 21 23:38:19.735945 containerd[2113]: time="2026-01-21T23:38:19.735927740Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 21 23:38:19.736245 containerd[2113]: time="2026-01-21T23:38:19.736227332Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 21 23:38:19.737031 containerd[2113]: time="2026-01-21T23:38:19.736628644Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 21 23:38:19.737031 containerd[2113]: time="2026-01-21T23:38:19.736659436Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jan 21 23:38:19.737031 containerd[2113]: time="2026-01-21T23:38:19.736666100Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 21 23:38:19.737031 containerd[2113]: time="2026-01-21T23:38:19.736698276Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 21 23:38:19.741355 containerd[2113]: time="2026-01-21T23:38:19.740448908Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 21 23:38:19.741355 containerd[2113]: time="2026-01-21T23:38:19.740531212Z" level=info msg="metadata content store policy set" policy=shared Jan 21 23:38:19.758727 containerd[2113]: time="2026-01-21T23:38:19.758178492Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 21 23:38:19.758727 containerd[2113]: time="2026-01-21T23:38:19.758233900Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 21 23:38:19.758727 containerd[2113]: time="2026-01-21T23:38:19.758356540Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 21 23:38:19.758727 containerd[2113]: time="2026-01-21T23:38:19.758367716Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 21 23:38:19.758727 containerd[2113]: time="2026-01-21T23:38:19.758376628Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 21 23:38:19.758727 containerd[2113]: time="2026-01-21T23:38:19.758384100Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 21 23:38:19.758727 containerd[2113]: time="2026-01-21T23:38:19.758393500Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 21 23:38:19.758727 containerd[2113]: time="2026-01-21T23:38:19.758399508Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 21 23:38:19.758727 containerd[2113]: time="2026-01-21T23:38:19.758406844Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 21 23:38:19.758727 containerd[2113]: time="2026-01-21T23:38:19.758414340Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 21 23:38:19.758727 containerd[2113]: time="2026-01-21T23:38:19.758421204Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 21 23:38:19.758727 containerd[2113]: time="2026-01-21T23:38:19.758427740Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 21 23:38:19.758727 containerd[2113]: time="2026-01-21T23:38:19.758438748Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 21 23:38:19.758727 containerd[2113]: time="2026-01-21T23:38:19.758449828Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 21 23:38:19.758935 containerd[2113]: time="2026-01-21T23:38:19.758550204Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 21 23:38:19.758935 
containerd[2113]: time="2026-01-21T23:38:19.758563724Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 21 23:38:19.758935 containerd[2113]: time="2026-01-21T23:38:19.758572708Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 21 23:38:19.758935 containerd[2113]: time="2026-01-21T23:38:19.758578892Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 21 23:38:19.758935 containerd[2113]: time="2026-01-21T23:38:19.758586420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 21 23:38:19.758935 containerd[2113]: time="2026-01-21T23:38:19.758593108Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 21 23:38:19.758935 containerd[2113]: time="2026-01-21T23:38:19.758608380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 21 23:38:19.758935 containerd[2113]: time="2026-01-21T23:38:19.758614956Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 21 23:38:19.758935 containerd[2113]: time="2026-01-21T23:38:19.758622204Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 21 23:38:19.758935 containerd[2113]: time="2026-01-21T23:38:19.758635516Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 21 23:38:19.758935 containerd[2113]: time="2026-01-21T23:38:19.758642076Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 21 23:38:19.758935 containerd[2113]: time="2026-01-21T23:38:19.758662532Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 21 23:38:19.758935 containerd[2113]: time="2026-01-21T23:38:19.758693780Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 21 23:38:19.758935 containerd[2113]: time="2026-01-21T23:38:19.758702428Z" level=info msg="Start snapshots syncer" Jan 21 23:38:19.759537 containerd[2113]: time="2026-01-21T23:38:19.759498172Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 21 23:38:19.760062 containerd[2113]: time="2026-01-21T23:38:19.760018580Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 21 23:38:19.760491 containerd[2113]: time="2026-01-21T23:38:19.760473884Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 21 23:38:19.760825 containerd[2113]: time="2026-01-21T23:38:19.760770828Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 21 23:38:19.760985 containerd[2113]: time="2026-01-21T23:38:19.760967092Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 21 23:38:19.761297 containerd[2113]: time="2026-01-21T23:38:19.761202284Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 21 23:38:19.761297 containerd[2113]: time="2026-01-21T23:38:19.761219564Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 21 23:38:19.761297 containerd[2113]: time="2026-01-21T23:38:19.761226740Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 21 23:38:19.761297 containerd[2113]: time="2026-01-21T23:38:19.761237580Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 21 23:38:19.761297 containerd[2113]: time="2026-01-21T23:38:19.761244292Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 21 23:38:19.761297 containerd[2113]: time="2026-01-21T23:38:19.761254428Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 21 23:38:19.761297 containerd[2113]: time="2026-01-21T23:38:19.761261356Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 21 
23:38:19.761297 containerd[2113]: time="2026-01-21T23:38:19.761268212Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 21 23:38:19.761248 locksmithd[2182]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 21 23:38:19.762155 containerd[2113]: time="2026-01-21T23:38:19.761792108Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 21 23:38:19.762155 containerd[2113]: time="2026-01-21T23:38:19.761816868Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 21 23:38:19.762155 containerd[2113]: time="2026-01-21T23:38:19.761823180Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 21 23:38:19.762155 containerd[2113]: time="2026-01-21T23:38:19.761911268Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 21 23:38:19.762155 containerd[2113]: time="2026-01-21T23:38:19.761923260Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 21 23:38:19.762155 containerd[2113]: time="2026-01-21T23:38:19.761936260Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 21 23:38:19.762155 containerd[2113]: time="2026-01-21T23:38:19.761945636Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 21 23:38:19.762155 containerd[2113]: time="2026-01-21T23:38:19.761954556Z" level=info msg="runtime interface created" Jan 21 23:38:19.762155 containerd[2113]: time="2026-01-21T23:38:19.761957996Z" level=info msg="created NRI interface" Jan 21 23:38:19.762155 containerd[2113]: time="2026-01-21T23:38:19.761962828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 21 23:38:19.762155 containerd[2113]: time="2026-01-21T23:38:19.761971460Z" level=info msg="Connect containerd service" Jan 21 23:38:19.762155 containerd[2113]: time="2026-01-21T23:38:19.761990820Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 21 23:38:19.764159 containerd[2113]: time="2026-01-21T23:38:19.763602084Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 21 23:38:19.853743 sshd_keygen[2087]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 21 23:38:19.889185 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 21 23:38:19.898349 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 21 23:38:19.907292 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... 
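The "failed to load cni during init" error near the end of the containerd startup above is expected on a freshly provisioned node: the CRI plugin defers pod networking until a CNI config appears in /etc/cni/net.d, which is normally installed later by the cluster's network add-on. The sketch below shows the kind of minimal bridge conflist that would satisfy the check; the file name, bridge name, and pod subnet are illustrative assumptions, not values taken from this host.

    # Illustrative minimal CNI conflist; a real cluster's network add-on would
    # install its own. File name, bridge name, and subnet are assumptions.
    import json
    import pathlib

    conflist = {
        "cniVersion": "1.0.0",
        "name": "containerd-net",
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "ranges": [[{"subnet": "10.88.0.0/16"}]],
                    "routes": [{"dst": "0.0.0.0/0"}],
                },
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    path = pathlib.Path("/etc/cni/net.d/10-containerd-net.conflist")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(conflist, indent=2) + "\n")

containerd's CNI conf syncer (started a few entries below as "Start cni network conf syncer for default") watches that directory, so such a file would be picked up without a restart.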
Jan 21 23:38:19.909425 containerd[2113]: time="2026-01-21T23:38:19.908188668Z" level=info msg="Start subscribing containerd event" Jan 21 23:38:19.909425 containerd[2113]: time="2026-01-21T23:38:19.908243116Z" level=info msg="Start recovering state" Jan 21 23:38:19.909425 containerd[2113]: time="2026-01-21T23:38:19.908325772Z" level=info msg="Start event monitor" Jan 21 23:38:19.909425 containerd[2113]: time="2026-01-21T23:38:19.908334468Z" level=info msg="Start cni network conf syncer for default" Jan 21 23:38:19.909425 containerd[2113]: time="2026-01-21T23:38:19.908339972Z" level=info msg="Start streaming server" Jan 21 23:38:19.909425 containerd[2113]: time="2026-01-21T23:38:19.908347340Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 21 23:38:19.909425 containerd[2113]: time="2026-01-21T23:38:19.908352828Z" level=info msg="runtime interface starting up..." Jan 21 23:38:19.909425 containerd[2113]: time="2026-01-21T23:38:19.908357268Z" level=info msg="starting plugins..." Jan 21 23:38:19.909425 containerd[2113]: time="2026-01-21T23:38:19.908367572Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 21 23:38:19.910002 containerd[2113]: time="2026-01-21T23:38:19.909973204Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 21 23:38:19.912241 containerd[2113]: time="2026-01-21T23:38:19.910123684Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 21 23:38:19.917548 containerd[2113]: time="2026-01-21T23:38:19.917490588Z" level=info msg="containerd successfully booted in 0.215749s" Jan 21 23:38:19.917632 systemd[1]: Started containerd.service - containerd container runtime. Jan 21 23:38:19.941157 systemd[1]: issuegen.service: Deactivated successfully. Jan 21 23:38:19.943374 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 21 23:38:19.953300 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 21 23:38:19.963593 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 21 23:38:19.978078 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 21 23:38:19.989368 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 21 23:38:19.997719 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 21 23:38:20.005722 systemd[1]: Reached target getty.target - Login Prompts. Jan 21 23:38:20.023857 tar[2108]: linux-arm64/README.md Jan 21 23:38:20.037078 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 21 23:38:20.311513 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 23:38:20.316904 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 21 23:38:20.325369 (kubelet)[2256]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 23:38:20.328115 systemd[1]: Startup finished in 1.734s (kernel) + 10.621s (initrd) + 6.151s (userspace) = 18.507s. Jan 21 23:38:20.420449 login[2246]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:38:20.426388 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 21 23:38:20.427716 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 21 23:38:20.435458 systemd-logind[2082]: New session 1 of user core. Jan 21 23:38:20.454934 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Jan 21 23:38:20.458280 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 21 23:38:20.471295 (systemd)[2266]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 21 23:38:20.477431 systemd-logind[2082]: New session c1 of user core. Jan 21 23:38:20.595692 systemd[2266]: Queued start job for default target default.target. Jan 21 23:38:20.600928 systemd[2266]: Created slice app.slice - User Application Slice. Jan 21 23:38:20.601102 systemd[2266]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 21 23:38:20.601175 systemd[2266]: Reached target paths.target - Paths. Jan 21 23:38:20.601225 systemd[2266]: Reached target timers.target - Timers. Jan 21 23:38:20.602312 systemd[2266]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 21 23:38:20.603692 systemd[2266]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 21 23:38:20.623231 systemd[2266]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 21 23:38:20.624359 systemd[2266]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 21 23:38:20.625273 systemd[2266]: Reached target sockets.target - Sockets. Jan 21 23:38:20.625386 systemd[2266]: Reached target basic.target - Basic System. Jan 21 23:38:20.625460 systemd[2266]: Reached target default.target - Main User Target. Jan 21 23:38:20.625536 systemd[2266]: Startup finished in 141ms. Jan 21 23:38:20.625634 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 21 23:38:20.630212 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 21 23:38:20.692686 waagent[2244]: 2026-01-21T23:38:20.692620Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jan 21 23:38:20.721632 waagent[2244]: 2026-01-21T23:38:20.720263Z INFO Daemon Daemon OS: flatcar 4515.1.0 Jan 21 23:38:20.727102 waagent[2244]: 2026-01-21T23:38:20.726549Z INFO Daemon Daemon Python: 3.11.13 Jan 21 23:38:20.732759 waagent[2244]: 2026-01-21T23:38:20.732685Z INFO Daemon Daemon Run daemon Jan 21 23:38:20.733919 login[2247]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:38:20.740136 waagent[2244]: 2026-01-21T23:38:20.740082Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4515.1.0' Jan 21 23:38:20.742429 systemd-logind[2082]: New session 2 of user core. Jan 21 23:38:20.746894 waagent[2244]: 2026-01-21T23:38:20.746846Z INFO Daemon Daemon Using waagent for provisioning Jan 21 23:38:20.750907 waagent[2244]: 2026-01-21T23:38:20.750867Z INFO Daemon Daemon Activate resource disk Jan 21 23:38:20.754617 waagent[2244]: 2026-01-21T23:38:20.754459Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 21 23:38:20.761238 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 21 23:38:20.764730 waagent[2244]: 2026-01-21T23:38:20.762759Z INFO Daemon Daemon Found device: None Jan 21 23:38:20.771745 waagent[2244]: 2026-01-21T23:38:20.766458Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 21 23:38:20.779362 waagent[2244]: 2026-01-21T23:38:20.778804Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 21 23:38:20.788485 waagent[2244]: 2026-01-21T23:38:20.788063Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 21 23:38:20.792506 waagent[2244]: 2026-01-21T23:38:20.792477Z INFO Daemon Daemon Running default provisioning handler Jan 21 23:38:20.808138 waagent[2244]: 2026-01-21T23:38:20.808090Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 21 23:38:20.821214 waagent[2244]: 2026-01-21T23:38:20.821167Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 21 23:38:20.829242 waagent[2244]: 2026-01-21T23:38:20.829197Z INFO Daemon Daemon cloud-init is enabled: False Jan 21 23:38:20.833163 waagent[2244]: 2026-01-21T23:38:20.832947Z INFO Daemon Daemon Copying ovf-env.xml Jan 21 23:38:20.873178 waagent[2244]: 2026-01-21T23:38:20.873017Z INFO Daemon Daemon Successfully mounted dvd Jan 21 23:38:20.888761 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 21 23:38:20.891723 waagent[2244]: 2026-01-21T23:38:20.891588Z INFO Daemon Daemon Detect protocol endpoint Jan 21 23:38:20.896306 waagent[2244]: 2026-01-21T23:38:20.895370Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 21 23:38:20.896359 kubelet[2256]: E0121 23:38:20.896269 2256 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 23:38:20.901394 waagent[2244]: 2026-01-21T23:38:20.901181Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 21 23:38:20.902498 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 23:38:20.902615 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 23:38:20.902917 systemd[1]: kubelet.service: Consumed 558ms CPU time, 258.3M memory peak. 
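The kubelet exit above is the usual state of an un-joined node: /var/lib/kubelet/config.yaml is generated by kubeadm during init/join, so kubelet.service keeps failing (and systemd keeps retrying) until that happens. For reference, a minimal hand-written KubeletConfiguration could look like the sketch below; this is an illustrative assumption, not the file kubeadm would produce.

    # Illustrative only: kubeadm normally writes /var/lib/kubelet/config.yaml at
    # "kubeadm join" time. The kubelet accepts JSON as well as YAML for this
    # file, and JSON is a subset of YAML, so the dump below parses either way.
    import json
    import pathlib

    kubelet_config = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        # Matches SystemdCgroup=true in the containerd runc options logged above.
        "cgroupDriver": "systemd",
    }

    path = pathlib.Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(kubelet_config, indent=2) + "\n")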
Jan 21 23:38:20.906162 waagent[2244]: 2026-01-21T23:38:20.906128Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 21 23:38:20.910195 waagent[2244]: 2026-01-21T23:38:20.910162Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 21 23:38:20.914378 waagent[2244]: 2026-01-21T23:38:20.914347Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 21 23:38:20.930891 waagent[2244]: 2026-01-21T23:38:20.930856Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 21 23:38:20.935887 waagent[2244]: 2026-01-21T23:38:20.935865Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 21 23:38:20.939850 waagent[2244]: 2026-01-21T23:38:20.939824Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 21 23:38:21.074082 waagent[2244]: 2026-01-21T23:38:21.072166Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 21 23:38:21.076991 waagent[2244]: 2026-01-21T23:38:21.076949Z INFO Daemon Daemon Forcing an update of the goal state. Jan 21 23:38:21.084209 waagent[2244]: 2026-01-21T23:38:21.084174Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 21 23:38:21.100709 waagent[2244]: 2026-01-21T23:38:21.100675Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 21 23:38:21.105187 waagent[2244]: 2026-01-21T23:38:21.105156Z INFO Daemon Jan 21 23:38:21.107316 waagent[2244]: 2026-01-21T23:38:21.107288Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 13e43038-e96b-46c7-9a0c-9c6bbbb00842 eTag: 14710367058916769587 source: Fabric] Jan 21 23:38:21.115646 waagent[2244]: 2026-01-21T23:38:21.115615Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 21 23:38:21.120851 waagent[2244]: 2026-01-21T23:38:21.120822Z INFO Daemon Jan 21 23:38:21.123022 waagent[2244]: 2026-01-21T23:38:21.122997Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 21 23:38:21.132079 waagent[2244]: 2026-01-21T23:38:21.132010Z INFO Daemon Daemon Downloading artifacts profile blob Jan 21 23:38:21.187062 waagent[2244]: 2026-01-21T23:38:21.186987Z INFO Daemon Downloaded certificate {'thumbprint': '18D89E640E14AB67F185A66AC92AA151119E1C18', 'hasPrivateKey': True} Jan 21 23:38:21.194374 waagent[2244]: 2026-01-21T23:38:21.194339Z INFO Daemon Fetch goal state completed Jan 21 23:38:21.203932 waagent[2244]: 2026-01-21T23:38:21.203902Z INFO Daemon Daemon Starting provisioning Jan 21 23:38:21.207618 waagent[2244]: 2026-01-21T23:38:21.207588Z INFO Daemon Daemon Handle ovf-env.xml. Jan 21 23:38:21.211046 waagent[2244]: 2026-01-21T23:38:21.211022Z INFO Daemon Daemon Set hostname [ci-4515.1.0-n-a0ba06055b] Jan 21 23:38:21.217297 waagent[2244]: 2026-01-21T23:38:21.217251Z INFO Daemon Daemon Publish hostname [ci-4515.1.0-n-a0ba06055b] Jan 21 23:38:21.221902 waagent[2244]: 2026-01-21T23:38:21.221868Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 21 23:38:21.226415 waagent[2244]: 2026-01-21T23:38:21.226387Z INFO Daemon Daemon Primary interface is [eth0] Jan 21 23:38:21.236315 systemd-networkd[1703]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 21 23:38:21.236327 systemd-networkd[1703]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. 
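The goal-state exchange logged above is plain HTTP against the WireServer at 168.63.129.16 using the negotiated wire protocol version (2012-11-30). A rough sketch of the same fetch is below; the exact URL path and header are best-effort assumptions from the WALinuxAgent wire protocol rather than values printed in this log, and the request only works from inside an Azure VM.

    # Rough sketch of a WireServer goal-state fetch (assumed endpoint/header,
    # based on the WALinuxAgent wire protocol; not copied from this log).
    import urllib.request

    req = urllib.request.Request(
        "http://168.63.129.16/machine/?comp=goalstate",
        headers={"x-ms-version": "2012-11-30"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        goal_state_xml = resp.read().decode()
    # The XML carries the incarnation number and the config URLs that the agent
    # summarizes above as "Fetch goal state completed".
    print(goal_state_xml[:200])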
Jan 21 23:38:21.236401 systemd-networkd[1703]: eth0: DHCP lease lost Jan 21 23:38:21.258060 waagent[2244]: 2026-01-21T23:38:21.257944Z INFO Daemon Daemon Create user account if not exists Jan 21 23:38:21.262368 waagent[2244]: 2026-01-21T23:38:21.262325Z INFO Daemon Daemon User core already exists, skip useradd Jan 21 23:38:21.266521 waagent[2244]: 2026-01-21T23:38:21.266489Z INFO Daemon Daemon Configure sudoer Jan 21 23:38:21.271092 systemd-networkd[1703]: eth0: DHCPv4 address 10.200.20.29/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 21 23:38:21.273398 waagent[2244]: 2026-01-21T23:38:21.273354Z INFO Daemon Daemon Configure sshd Jan 21 23:38:21.279530 waagent[2244]: 2026-01-21T23:38:21.279490Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 21 23:38:21.288676 waagent[2244]: 2026-01-21T23:38:21.288642Z INFO Daemon Daemon Deploy ssh public key. Jan 21 23:38:22.348829 waagent[2244]: 2026-01-21T23:38:22.345398Z INFO Daemon Daemon Provisioning complete Jan 21 23:38:22.359132 waagent[2244]: 2026-01-21T23:38:22.359100Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 21 23:38:22.363774 waagent[2244]: 2026-01-21T23:38:22.363743Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 21 23:38:22.371168 waagent[2244]: 2026-01-21T23:38:22.371141Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jan 21 23:38:22.469034 waagent[2323]: 2026-01-21T23:38:22.468967Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jan 21 23:38:22.470078 waagent[2323]: 2026-01-21T23:38:22.469505Z INFO ExtHandler ExtHandler OS: flatcar 4515.1.0 Jan 21 23:38:22.470078 waagent[2323]: 2026-01-21T23:38:22.469561Z INFO ExtHandler ExtHandler Python: 3.11.13 Jan 21 23:38:22.470078 waagent[2323]: 2026-01-21T23:38:22.469598Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jan 21 23:38:22.486657 waagent[2323]: 2026-01-21T23:38:22.486618Z INFO ExtHandler ExtHandler Distro: flatcar-4515.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jan 21 23:38:22.486894 waagent[2323]: 2026-01-21T23:38:22.486867Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 21 23:38:22.487006 waagent[2323]: 2026-01-21T23:38:22.486983Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 21 23:38:22.492814 waagent[2323]: 2026-01-21T23:38:22.492771Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 21 23:38:22.499078 waagent[2323]: 2026-01-21T23:38:22.497971Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 21 23:38:22.499078 waagent[2323]: 2026-01-21T23:38:22.498366Z INFO ExtHandler Jan 21 23:38:22.499078 waagent[2323]: 2026-01-21T23:38:22.498420Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: bd08538d-8461-47b3-b88f-735d97b661f5 eTag: 14710367058916769587 source: Fabric] Jan 21 23:38:22.499078 waagent[2323]: 2026-01-21T23:38:22.498619Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 21 23:38:22.499078 waagent[2323]: 2026-01-21T23:38:22.498992Z INFO ExtHandler Jan 21 23:38:22.499078 waagent[2323]: 2026-01-21T23:38:22.499031Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 21 23:38:22.502361 waagent[2323]: 2026-01-21T23:38:22.502336Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 21 23:38:22.552710 waagent[2323]: 2026-01-21T23:38:22.552639Z INFO ExtHandler Downloaded certificate {'thumbprint': '18D89E640E14AB67F185A66AC92AA151119E1C18', 'hasPrivateKey': True} Jan 21 23:38:22.553121 waagent[2323]: 2026-01-21T23:38:22.553087Z INFO ExtHandler Fetch goal state completed Jan 21 23:38:22.565913 waagent[2323]: 2026-01-21T23:38:22.565860Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.3 30 Sep 2025 (Library: OpenSSL 3.4.3 30 Sep 2025) Jan 21 23:38:22.569232 waagent[2323]: 2026-01-21T23:38:22.569188Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2323 Jan 21 23:38:22.569332 waagent[2323]: 2026-01-21T23:38:22.569307Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 21 23:38:22.569573 waagent[2323]: 2026-01-21T23:38:22.569547Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jan 21 23:38:22.570661 waagent[2323]: 2026-01-21T23:38:22.570627Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4515.1.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 21 23:38:22.570975 waagent[2323]: 2026-01-21T23:38:22.570946Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4515.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jan 21 23:38:22.571115 waagent[2323]: 2026-01-21T23:38:22.571090Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jan 21 23:38:22.571541 waagent[2323]: 2026-01-21T23:38:22.571512Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 21 23:38:22.584712 waagent[2323]: 2026-01-21T23:38:22.584683Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 21 23:38:22.584848 waagent[2323]: 2026-01-21T23:38:22.584821Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 21 23:38:22.589419 waagent[2323]: 2026-01-21T23:38:22.589394Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 21 23:38:22.594049 systemd[1]: Reload requested from client PID 2338 ('systemctl') (unit waagent.service)... Jan 21 23:38:22.594284 systemd[1]: Reloading... Jan 21 23:38:22.660126 zram_generator::config[2383]: No configuration found. Jan 21 23:38:22.812403 systemd[1]: Reloading finished in 217 ms. Jan 21 23:38:22.837199 waagent[2323]: 2026-01-21T23:38:22.836260Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 21 23:38:22.837199 waagent[2323]: 2026-01-21T23:38:22.836401Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 21 23:38:22.904218 waagent[2323]: 2026-01-21T23:38:22.904154Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 21 23:38:22.904618 waagent[2323]: 2026-01-21T23:38:22.904582Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jan 21 23:38:22.905371 waagent[2323]: 2026-01-21T23:38:22.905321Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 21 23:38:22.905464 waagent[2323]: 2026-01-21T23:38:22.905428Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 21 23:38:22.905522 waagent[2323]: 2026-01-21T23:38:22.905500Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 21 23:38:22.905853 waagent[2323]: 2026-01-21T23:38:22.905667Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 21 23:38:22.906009 waagent[2323]: 2026-01-21T23:38:22.905970Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 21 23:38:22.906349 waagent[2323]: 2026-01-21T23:38:22.906310Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 21 23:38:22.906492 waagent[2323]: 2026-01-21T23:38:22.906433Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 21 23:38:22.906558 waagent[2323]: 2026-01-21T23:38:22.906531Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 21 23:38:22.906599 waagent[2323]: 2026-01-21T23:38:22.906581Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 21 23:38:22.906698 waagent[2323]: 2026-01-21T23:38:22.906672Z INFO EnvHandler ExtHandler Configure routes Jan 21 23:38:22.906735 waagent[2323]: 2026-01-21T23:38:22.906719Z INFO EnvHandler ExtHandler Gateway:None Jan 21 23:38:22.906764 waagent[2323]: 2026-01-21T23:38:22.906748Z INFO EnvHandler ExtHandler Routes:None Jan 21 23:38:22.907114 waagent[2323]: 2026-01-21T23:38:22.907077Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 21 23:38:22.907174 waagent[2323]: 2026-01-21T23:38:22.907154Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 21 23:38:22.907290 waagent[2323]: 2026-01-21T23:38:22.907235Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 21 23:38:22.907327 waagent[2323]: 2026-01-21T23:38:22.907296Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 21 23:38:22.907327 waagent[2323]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 21 23:38:22.907327 waagent[2323]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 21 23:38:22.907327 waagent[2323]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 21 23:38:22.907327 waagent[2323]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 21 23:38:22.907327 waagent[2323]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 21 23:38:22.907327 waagent[2323]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 21 23:38:22.913094 waagent[2323]: 2026-01-21T23:38:22.912772Z INFO ExtHandler ExtHandler Jan 21 23:38:22.913094 waagent[2323]: 2026-01-21T23:38:22.912830Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 752f484a-384f-4f0a-b879-8bdbfd6fae9d correlation 6696ebfd-bb76-4609-a952-807b2fbc55db created: 2026-01-21T23:37:47.560425Z] Jan 21 23:38:22.913470 waagent[2323]: 2026-01-21T23:38:22.913439Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
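The routing table above is copied verbatim from /proc/net/route, where the Destination/Gateway/Mask columns are 32-bit values printed in host byte order (little-endian on this aarch64 guest). Decoding them confirms the default route via 10.200.20.1 and the host route to the WireServer, 168.63.129.16:

    # Decode the hex address columns from the /proc/net/route dump above.
    import socket
    import struct

    def hex_to_ip(hex_word: str) -> str:
        # /proc/net/route prints addresses as little-endian 32-bit hex words.
        return socket.inet_ntoa(struct.pack("<I", int(hex_word, 16)))

    print(hex_to_ip("00000000"), "via", hex_to_ip("0114C80A"))  # 0.0.0.0 via 10.200.20.1
    print(hex_to_ip("0014C80A"))  # 10.200.20.0, the local /24
    print(hex_to_ip("10813FA8"))  # 168.63.129.16, the WireServer host route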
Jan 21 23:38:22.914229 waagent[2323]: 2026-01-21T23:38:22.914199Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jan 21 23:38:22.929455 waagent[2323]: 2026-01-21T23:38:22.929407Z INFO MonitorHandler ExtHandler Network interfaces: Jan 21 23:38:22.929455 waagent[2323]: Executing ['ip', '-a', '-o', 'link']: Jan 21 23:38:22.929455 waagent[2323]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 21 23:38:22.929455 waagent[2323]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:b8:82:8d brd ff:ff:ff:ff:ff:ff\ altname enx7ced8db8828d Jan 21 23:38:22.929455 waagent[2323]: 3: enP12360s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:b8:82:8d brd ff:ff:ff:ff:ff:ff\ altname enP12360p0s2 Jan 21 23:38:22.929455 waagent[2323]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 21 23:38:22.929455 waagent[2323]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 21 23:38:22.929455 waagent[2323]: 2: eth0 inet 10.200.20.29/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 21 23:38:22.929455 waagent[2323]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 21 23:38:22.929455 waagent[2323]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 21 23:38:22.929455 waagent[2323]: 2: eth0 inet6 fe80::7eed:8dff:feb8:828d/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 21 23:38:22.937172 waagent[2323]: 2026-01-21T23:38:22.937124Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jan 21 23:38:22.937172 waagent[2323]: Try `iptables -h' or 'iptables --help' for more information.) 
Jan 21 23:38:22.937455 waagent[2323]: 2026-01-21T23:38:22.937423Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 9E16F695-32EA-42B9-BDDE-6DC0516D80E2;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jan 21 23:38:22.955794 waagent[2323]: 2026-01-21T23:38:22.955745Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jan 21 23:38:22.955794 waagent[2323]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 21 23:38:22.955794 waagent[2323]: pkts bytes target prot opt in out source destination Jan 21 23:38:22.955794 waagent[2323]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 21 23:38:22.955794 waagent[2323]: pkts bytes target prot opt in out source destination Jan 21 23:38:22.955794 waagent[2323]: Chain OUTPUT (policy ACCEPT 3 packets, 349 bytes) Jan 21 23:38:22.955794 waagent[2323]: pkts bytes target prot opt in out source destination Jan 21 23:38:22.955794 waagent[2323]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 21 23:38:22.955794 waagent[2323]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 21 23:38:22.955794 waagent[2323]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 21 23:38:22.958069 waagent[2323]: 2026-01-21T23:38:22.958021Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 21 23:38:22.958069 waagent[2323]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 21 23:38:22.958069 waagent[2323]: pkts bytes target prot opt in out source destination Jan 21 23:38:22.958069 waagent[2323]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 21 23:38:22.958069 waagent[2323]: pkts bytes target prot opt in out source destination Jan 21 23:38:22.958069 waagent[2323]: Chain OUTPUT (policy ACCEPT 3 packets, 349 bytes) Jan 21 23:38:22.958069 waagent[2323]: pkts bytes target prot opt in out source destination Jan 21 23:38:22.958069 waagent[2323]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 21 23:38:22.958069 waagent[2323]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 21 23:38:22.958069 waagent[2323]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 21 23:38:22.958253 waagent[2323]: 2026-01-21T23:38:22.958227Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 21 23:38:31.153401 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 21 23:38:31.155221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 23:38:31.256698 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 23:38:31.264412 (kubelet)[2474]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 23:38:31.373947 kubelet[2474]: E0121 23:38:31.373902 2474 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 23:38:31.376866 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 23:38:31.376985 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 23:38:31.379131 systemd[1]: kubelet.service: Consumed 109ms CPU time, 105.5M memory peak. 
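The firewall rules the agent reports above restrict access to the WireServer: DNS (tcp/53) and root-owned traffic to 168.63.129.16 are accepted, and anything else opening a new connection is dropped. A quick way to re-check them later is to list the same table the agent manages; the sketch below assumes the rules live in the "security" table, as the agent's own "-t security" query above suggests, and must run as root.

    # Sketch: list the OUTPUT chain of the table waagent manages for the
    # WireServer rules shown above (assumption: the "security" table).
    import subprocess

    output = subprocess.run(
        ["iptables", "-w", "-t", "security", "-L", "OUTPUT", "-nv"],
        capture_output=True, text=True, check=True,
    ).stdout
    print("DROP rule present:", "DROP" in output and "168.63.129.16" in output)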
Jan 21 23:38:33.731821 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 21 23:38:33.733897 systemd[1]: Started sshd@0-10.200.20.29:22-10.200.16.10:37904.service - OpenSSH per-connection server daemon (10.200.16.10:37904). Jan 21 23:38:34.193831 sshd[2482]: Accepted publickey for core from 10.200.16.10 port 37904 ssh2: RSA SHA256:5TSQqK9LgDcDF4tXOpTwbPSMv8XD1lnuHq7SoUWOfFs Jan 21 23:38:34.194551 sshd-session[2482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:38:34.198332 systemd-logind[2082]: New session 3 of user core. Jan 21 23:38:34.206184 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 21 23:38:34.511852 systemd[1]: Started sshd@1-10.200.20.29:22-10.200.16.10:37918.service - OpenSSH per-connection server daemon (10.200.16.10:37918). Jan 21 23:38:34.932116 sshd[2488]: Accepted publickey for core from 10.200.16.10 port 37918 ssh2: RSA SHA256:5TSQqK9LgDcDF4tXOpTwbPSMv8XD1lnuHq7SoUWOfFs Jan 21 23:38:34.933136 sshd-session[2488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:38:34.936662 systemd-logind[2082]: New session 4 of user core. Jan 21 23:38:34.943178 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 21 23:38:35.166685 sshd[2491]: Connection closed by 10.200.16.10 port 37918 Jan 21 23:38:35.168276 sshd-session[2488]: pam_unix(sshd:session): session closed for user core Jan 21 23:38:35.171352 systemd[1]: sshd@1-10.200.20.29:22-10.200.16.10:37918.service: Deactivated successfully. Jan 21 23:38:35.172906 systemd[1]: session-4.scope: Deactivated successfully. Jan 21 23:38:35.173672 systemd-logind[2082]: Session 4 logged out. Waiting for processes to exit. Jan 21 23:38:35.174809 systemd-logind[2082]: Removed session 4. Jan 21 23:38:35.256839 systemd[1]: Started sshd@2-10.200.20.29:22-10.200.16.10:37928.service - OpenSSH per-connection server daemon (10.200.16.10:37928). Jan 21 23:38:35.674173 sshd[2497]: Accepted publickey for core from 10.200.16.10 port 37928 ssh2: RSA SHA256:5TSQqK9LgDcDF4tXOpTwbPSMv8XD1lnuHq7SoUWOfFs Jan 21 23:38:35.675217 sshd-session[2497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:38:35.679168 systemd-logind[2082]: New session 5 of user core. Jan 21 23:38:35.689194 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 21 23:38:35.904511 sshd[2500]: Connection closed by 10.200.16.10 port 37928 Jan 21 23:38:35.905068 sshd-session[2497]: pam_unix(sshd:session): session closed for user core Jan 21 23:38:35.908718 systemd[1]: sshd@2-10.200.20.29:22-10.200.16.10:37928.service: Deactivated successfully. Jan 21 23:38:35.911152 systemd[1]: session-5.scope: Deactivated successfully. Jan 21 23:38:35.912133 systemd-logind[2082]: Session 5 logged out. Waiting for processes to exit. Jan 21 23:38:35.913692 systemd-logind[2082]: Removed session 5. Jan 21 23:38:35.992007 systemd[1]: Started sshd@3-10.200.20.29:22-10.200.16.10:37938.service - OpenSSH per-connection server daemon (10.200.16.10:37938). Jan 21 23:38:36.407206 sshd[2506]: Accepted publickey for core from 10.200.16.10 port 37938 ssh2: RSA SHA256:5TSQqK9LgDcDF4tXOpTwbPSMv8XD1lnuHq7SoUWOfFs Jan 21 23:38:36.407937 sshd-session[2506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:38:36.412442 systemd-logind[2082]: New session 6 of user core. Jan 21 23:38:36.422258 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 21 23:38:36.638436 sshd[2509]: Connection closed by 10.200.16.10 port 37938 Jan 21 23:38:36.638988 sshd-session[2506]: pam_unix(sshd:session): session closed for user core Jan 21 23:38:36.642268 systemd[1]: sshd@3-10.200.20.29:22-10.200.16.10:37938.service: Deactivated successfully. Jan 21 23:38:36.643823 systemd[1]: session-6.scope: Deactivated successfully. Jan 21 23:38:36.645090 systemd-logind[2082]: Session 6 logged out. Waiting for processes to exit. Jan 21 23:38:36.645802 systemd-logind[2082]: Removed session 6. Jan 21 23:38:36.738107 systemd[1]: Started sshd@4-10.200.20.29:22-10.200.16.10:37940.service - OpenSSH per-connection server daemon (10.200.16.10:37940). Jan 21 23:38:37.156877 sshd[2515]: Accepted publickey for core from 10.200.16.10 port 37940 ssh2: RSA SHA256:5TSQqK9LgDcDF4tXOpTwbPSMv8XD1lnuHq7SoUWOfFs Jan 21 23:38:37.157586 sshd-session[2515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:38:37.162227 systemd-logind[2082]: New session 7 of user core. Jan 21 23:38:37.171236 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 21 23:38:37.348113 sudo[2519]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 21 23:38:37.348317 sudo[2519]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 21 23:38:37.362546 sudo[2519]: pam_unix(sudo:session): session closed for user root Jan 21 23:38:37.441074 sshd[2518]: Connection closed by 10.200.16.10 port 37940 Jan 21 23:38:37.440005 sshd-session[2515]: pam_unix(sshd:session): session closed for user core Jan 21 23:38:37.444645 systemd[1]: sshd@4-10.200.20.29:22-10.200.16.10:37940.service: Deactivated successfully. Jan 21 23:38:37.446487 systemd[1]: session-7.scope: Deactivated successfully. Jan 21 23:38:37.447418 systemd-logind[2082]: Session 7 logged out. Waiting for processes to exit. Jan 21 23:38:37.448947 systemd-logind[2082]: Removed session 7. Jan 21 23:38:37.522185 systemd[1]: Started sshd@5-10.200.20.29:22-10.200.16.10:37944.service - OpenSSH per-connection server daemon (10.200.16.10:37944). Jan 21 23:38:37.909282 sshd[2525]: Accepted publickey for core from 10.200.16.10 port 37944 ssh2: RSA SHA256:5TSQqK9LgDcDF4tXOpTwbPSMv8XD1lnuHq7SoUWOfFs Jan 21 23:38:37.910208 sshd-session[2525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:38:37.913870 systemd-logind[2082]: New session 8 of user core. Jan 21 23:38:37.924187 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 21 23:38:38.054829 sudo[2530]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 21 23:38:38.055035 sudo[2530]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 21 23:38:38.060963 sudo[2530]: pam_unix(sudo:session): session closed for user root Jan 21 23:38:38.065144 sudo[2529]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 21 23:38:38.065329 sudo[2529]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 21 23:38:38.072859 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jan 21 23:38:38.098000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 21 23:38:38.101179 augenrules[2552]: No rules Jan 21 23:38:38.102959 kernel: kauditd_printk_skb: 144 callbacks suppressed Jan 21 23:38:38.103001 kernel: audit: type=1305 audit(1769038718.098:244): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 21 23:38:38.110813 systemd[1]: audit-rules.service: Deactivated successfully. Jan 21 23:38:38.111010 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 21 23:38:38.098000 audit[2552]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffed19c20 a2=420 a3=0 items=0 ppid=2533 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:38.117601 sudo[2529]: pam_unix(sudo:session): session closed for user root Jan 21 23:38:38.127065 kernel: audit: type=1300 audit(1769038718.098:244): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffed19c20 a2=420 a3=0 items=0 ppid=2533 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:38.098000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 21 23:38:38.134554 kernel: audit: type=1327 audit(1769038718.098:244): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 21 23:38:38.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:38.145988 kernel: audit: type=1130 audit(1769038718.113:245): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:38.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:38.157222 kernel: audit: type=1131 audit(1769038718.113:246): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:38.116000 audit[2529]: USER_END pid=2529 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 23:38:38.170383 kernel: audit: type=1106 audit(1769038718.116:247): pid=2529 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 23:38:38.116000 audit[2529]: CRED_DISP pid=2529 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 21 23:38:38.182971 kernel: audit: type=1104 audit(1769038718.116:248): pid=2529 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 23:38:38.197097 sshd[2528]: Connection closed by 10.200.16.10 port 37944 Jan 21 23:38:38.197462 sshd-session[2525]: pam_unix(sshd:session): session closed for user core Jan 21 23:38:38.197000 audit[2525]: USER_END pid=2525 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:38:38.197000 audit[2525]: CRED_DISP pid=2525 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:38:38.230406 kernel: audit: type=1106 audit(1769038718.197:249): pid=2525 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:38:38.230485 kernel: audit: type=1104 audit(1769038718.197:250): pid=2525 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:38:38.230833 systemd[1]: sshd@5-10.200.20.29:22-10.200.16.10:37944.service: Deactivated successfully. Jan 21 23:38:38.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.29:22-10.200.16.10:37944 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:38.233036 systemd[1]: session-8.scope: Deactivated successfully. Jan 21 23:38:38.244821 systemd-logind[2082]: Session 8 logged out. Waiting for processes to exit. Jan 21 23:38:38.246218 kernel: audit: type=1131 audit(1769038718.230:251): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.29:22-10.200.16.10:37944 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:38.246558 systemd-logind[2082]: Removed session 8. Jan 21 23:38:38.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.29:22-10.200.16.10:37946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:38.303866 systemd[1]: Started sshd@6-10.200.20.29:22-10.200.16.10:37946.service - OpenSSH per-connection server daemon (10.200.16.10:37946). 
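The audit PROCTITLE fields in the records above and below are the raw command lines, hex-encoded with NUL bytes between arguments. A small helper makes them readable, for example the auditctl reload above and the first iptables chain dockerd creates further below:

    # Decode an audit PROCTITLE value: hex string, arguments separated by NULs.
    def decode_proctitle(hex_str: str) -> str:
        return bytes.fromhex(hex_str).replace(b"\x00", b" ").decode()

    # From the audit-rules reload above:
    print(decode_proctitle(
        "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    ))  # /sbin/auditctl -R /etc/audit/audit.rules

    # From the first Docker NETFILTER_CFG record below:
    print(decode_proctitle(
        "2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"
    ))  # /usr/bin/iptables --wait -t nat -N DOCKER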
Jan 21 23:38:38.726000 audit[2561]: USER_ACCT pid=2561 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:38:38.727722 sshd[2561]: Accepted publickey for core from 10.200.16.10 port 37946 ssh2: RSA SHA256:5TSQqK9LgDcDF4tXOpTwbPSMv8XD1lnuHq7SoUWOfFs Jan 21 23:38:38.727000 audit[2561]: CRED_ACQ pid=2561 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:38:38.727000 audit[2561]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffed171690 a2=3 a3=0 items=0 ppid=1 pid=2561 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:38.727000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:38:38.729123 sshd-session[2561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:38:38.732686 systemd-logind[2082]: New session 9 of user core. Jan 21 23:38:38.740385 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 21 23:38:38.741000 audit[2561]: USER_START pid=2561 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:38:38.742000 audit[2564]: CRED_ACQ pid=2564 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:38:38.887973 sudo[2565]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 21 23:38:38.886000 audit[2565]: USER_ACCT pid=2565 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 23:38:38.887000 audit[2565]: CRED_REFR pid=2565 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 23:38:38.888215 sudo[2565]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 21 23:38:38.888000 audit[2565]: USER_START pid=2565 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 23:38:40.366955 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 21 23:38:40.376289 (dockerd)[2582]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 21 23:38:40.653110 dockerd[2582]: time="2026-01-21T23:38:40.651337732Z" level=info msg="Starting up" Jan 21 23:38:40.656128 dockerd[2582]: time="2026-01-21T23:38:40.656097932Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 21 23:38:40.663889 dockerd[2582]: time="2026-01-21T23:38:40.663847916Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 21 23:38:40.753947 dockerd[2582]: time="2026-01-21T23:38:40.753903468Z" level=info msg="Loading containers: start." Jan 21 23:38:40.771068 kernel: Initializing XFRM netlink socket Jan 21 23:38:40.793000 audit[2628]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=2628 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:40.793000 audit[2628]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffe0788ee0 a2=0 a3=0 items=0 ppid=2582 pid=2628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.793000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 21 23:38:40.795000 audit[2630]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=2630 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:40.795000 audit[2630]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc17389f0 a2=0 a3=0 items=0 ppid=2582 pid=2630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.795000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 21 23:38:40.797000 audit[2632]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2632 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:40.797000 audit[2632]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd8cf27e0 a2=0 a3=0 items=0 ppid=2582 pid=2632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.797000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 21 23:38:40.798000 audit[2634]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2634 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:40.798000 audit[2634]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe649d210 a2=0 a3=0 items=0 ppid=2582 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.798000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 21 23:38:40.800000 audit[2636]: NETFILTER_CFG table=filter:9 family=2 entries=1 
op=nft_register_chain pid=2636 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:40.800000 audit[2636]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe5256ee0 a2=0 a3=0 items=0 ppid=2582 pid=2636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.800000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 21 23:38:40.801000 audit[2638]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=2638 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:40.801000 audit[2638]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd80ba9a0 a2=0 a3=0 items=0 ppid=2582 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.801000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 21 23:38:40.803000 audit[2640]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=2640 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:40.803000 audit[2640]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe5600b00 a2=0 a3=0 items=0 ppid=2582 pid=2640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.803000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 21 23:38:40.805000 audit[2642]: NETFILTER_CFG table=nat:12 family=2 entries=2 op=nft_register_chain pid=2642 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:40.805000 audit[2642]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffdb6e39e0 a2=0 a3=0 items=0 ppid=2582 pid=2642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.805000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 21 23:38:40.831000 audit[2645]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2645 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:40.831000 audit[2645]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=fffffe9fc180 a2=0 a3=0 items=0 ppid=2582 pid=2645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.831000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 21 23:38:40.833000 audit[2647]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=2647 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:40.833000 audit[2647]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd1072430 a2=0 a3=0 items=0 ppid=2582 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.833000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 21 23:38:40.834000 audit[2649]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2649 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:40.834000 audit[2649]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=fffff3c543c0 a2=0 a3=0 items=0 ppid=2582 pid=2649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.834000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 21 23:38:40.836000 audit[2651]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2651 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:40.836000 audit[2651]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffd10e65b0 a2=0 a3=0 items=0 ppid=2582 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.836000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 21 23:38:40.837000 audit[2653]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_rule pid=2653 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:40.837000 audit[2653]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=fffffa1de4c0 a2=0 a3=0 items=0 ppid=2582 pid=2653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.837000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 21 23:38:40.872000 audit[2683]: NETFILTER_CFG table=nat:18 family=10 entries=2 op=nft_register_chain pid=2683 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:38:40.872000 audit[2683]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffcb7003e0 a2=0 a3=0 items=0 ppid=2582 pid=2683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.872000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 21 23:38:40.874000 audit[2685]: NETFILTER_CFG table=filter:19 family=10 entries=2 op=nft_register_chain pid=2685 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:38:40.874000 audit[2685]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffff62ed5f0 a2=0 a3=0 items=0 ppid=2582 
pid=2685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.874000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 21 23:38:40.875000 audit[2687]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2687 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:38:40.875000 audit[2687]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc8f6b7b0 a2=0 a3=0 items=0 ppid=2582 pid=2687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.875000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 21 23:38:40.877000 audit[2689]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2689 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:38:40.877000 audit[2689]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd8219080 a2=0 a3=0 items=0 ppid=2582 pid=2689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.877000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 21 23:38:40.878000 audit[2691]: NETFILTER_CFG table=filter:22 family=10 entries=1 op=nft_register_chain pid=2691 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:38:40.878000 audit[2691]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffebe7d730 a2=0 a3=0 items=0 ppid=2582 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.878000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 21 23:38:40.879000 audit[2693]: NETFILTER_CFG table=filter:23 family=10 entries=1 op=nft_register_chain pid=2693 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:38:40.879000 audit[2693]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff1d58960 a2=0 a3=0 items=0 ppid=2582 pid=2693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.879000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 21 23:38:40.881000 audit[2695]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_chain pid=2695 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:38:40.881000 audit[2695]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffcfbd29c0 a2=0 a3=0 items=0 ppid=2582 pid=2695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 
23:38:40.881000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 21 23:38:40.882000 audit[2697]: NETFILTER_CFG table=nat:25 family=10 entries=2 op=nft_register_chain pid=2697 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:38:40.882000 audit[2697]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffdcdf0980 a2=0 a3=0 items=0 ppid=2582 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.882000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 21 23:38:40.884000 audit[2699]: NETFILTER_CFG table=nat:26 family=10 entries=2 op=nft_register_chain pid=2699 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:38:40.884000 audit[2699]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffff4eebc0 a2=0 a3=0 items=0 ppid=2582 pid=2699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.884000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 21 23:38:40.886000 audit[2701]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=2701 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:38:40.886000 audit[2701]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffff4c2d2e0 a2=0 a3=0 items=0 ppid=2582 pid=2701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.886000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 21 23:38:40.887000 audit[2703]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=2703 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:38:40.887000 audit[2703]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=fffff80b0000 a2=0 a3=0 items=0 ppid=2582 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.887000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 21 23:38:40.889000 audit[2705]: NETFILTER_CFG table=filter:29 family=10 entries=1 op=nft_register_rule pid=2705 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:38:40.889000 audit[2705]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffc590c360 a2=0 a3=0 items=0 ppid=2582 pid=2705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.889000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 21 23:38:40.890000 audit[2707]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_rule pid=2707 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:38:40.890000 audit[2707]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffd4397260 a2=0 a3=0 items=0 ppid=2582 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.890000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 21 23:38:40.894000 audit[2712]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=2712 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:40.894000 audit[2712]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe540b2e0 a2=0 a3=0 items=0 ppid=2582 pid=2712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.894000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 21 23:38:40.895000 audit[2714]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=2714 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:40.895000 audit[2714]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd19d9970 a2=0 a3=0 items=0 ppid=2582 pid=2714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.895000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 21 23:38:40.897000 audit[2716]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2716 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:40.897000 audit[2716]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffff0f51340 a2=0 a3=0 items=0 ppid=2582 pid=2716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.897000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 21 23:38:40.898000 audit[2718]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=2718 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:38:40.898000 audit[2718]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe809b7f0 a2=0 a3=0 items=0 ppid=2582 pid=2718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.898000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 21 23:38:40.900000 audit[2720]: NETFILTER_CFG table=filter:35 family=10 
entries=1 op=nft_register_rule pid=2720 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:38:40.900000 audit[2720]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffff7f785e0 a2=0 a3=0 items=0 ppid=2582 pid=2720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.900000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 21 23:38:40.901000 audit[2722]: NETFILTER_CFG table=filter:36 family=10 entries=1 op=nft_register_rule pid=2722 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:38:40.901000 audit[2722]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffc8572080 a2=0 a3=0 items=0 ppid=2582 pid=2722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.901000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 21 23:38:40.931000 audit[2727]: NETFILTER_CFG table=nat:37 family=2 entries=2 op=nft_register_chain pid=2727 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:40.931000 audit[2727]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffc6055730 a2=0 a3=0 items=0 ppid=2582 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.931000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 21 23:38:40.934000 audit[2729]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=2729 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:40.934000 audit[2729]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffcae93150 a2=0 a3=0 items=0 ppid=2582 pid=2729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.934000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 21 23:38:40.940000 audit[2737]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2737 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:40.940000 audit[2737]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffcf5f6ab0 a2=0 a3=0 items=0 ppid=2582 pid=2737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.940000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 21 23:38:40.944000 audit[2742]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2742 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 
23:38:40.944000 audit[2742]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffd4f3dc50 a2=0 a3=0 items=0 ppid=2582 pid=2742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.944000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 21 23:38:40.946000 audit[2744]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2744 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:40.946000 audit[2744]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffc3649ff0 a2=0 a3=0 items=0 ppid=2582 pid=2744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.946000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 21 23:38:40.947000 audit[2746]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=2746 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:40.947000 audit[2746]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffec257a60 a2=0 a3=0 items=0 ppid=2582 pid=2746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.947000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 21 23:38:40.949000 audit[2748]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=2748 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:40.949000 audit[2748]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffe3772980 a2=0 a3=0 items=0 ppid=2582 pid=2748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.949000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 21 23:38:40.950000 audit[2750]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_rule pid=2750 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:40.950000 audit[2750]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc9196210 a2=0 a3=0 items=0 ppid=2582 pid=2750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:40.950000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 21 23:38:40.952499 
systemd-networkd[1703]: docker0: Link UP Jan 21 23:38:40.988304 dockerd[2582]: time="2026-01-21T23:38:40.988072908Z" level=info msg="Loading containers: done." Jan 21 23:38:41.058442 dockerd[2582]: time="2026-01-21T23:38:41.058390356Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 21 23:38:41.058612 dockerd[2582]: time="2026-01-21T23:38:41.058515588Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 21 23:38:41.058632 dockerd[2582]: time="2026-01-21T23:38:41.058611316Z" level=info msg="Initializing buildkit" Jan 21 23:38:41.098778 dockerd[2582]: time="2026-01-21T23:38:41.098735012Z" level=info msg="Completed buildkit initialization" Jan 21 23:38:41.103610 dockerd[2582]: time="2026-01-21T23:38:41.103569092Z" level=info msg="Daemon has completed initialization" Jan 21 23:38:41.103704 dockerd[2582]: time="2026-01-21T23:38:41.103619956Z" level=info msg="API listen on /run/docker.sock" Jan 21 23:38:41.104094 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 21 23:38:41.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:41.627397 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 21 23:38:41.628699 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 23:38:41.687588 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4271325623-merged.mount: Deactivated successfully. Jan 21 23:38:41.738277 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 23:38:41.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:41.747530 (kubelet)[2797]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 23:38:41.865102 containerd[2113]: time="2026-01-21T23:38:41.864664012Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 21 23:38:41.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 23:38:42.143320 kubelet[2797]: E0121 23:38:41.870409 2797 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 23:38:41.872445 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 23:38:41.872569 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 23:38:41.873154 systemd[1]: kubelet.service: Consumed 105ms CPU time, 106.3M memory peak. Jan 21 23:38:43.047058 chronyd[2050]: Selected source PHC0 Jan 21 23:38:43.106898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4087693841.mount: Deactivated successfully. 
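The audit records above capture each xtables-nft-multi invocation with its command line hex-encoded in the PROCTITLE field (audit hex-encodes the value because the recorded argv uses NUL separators). As a side note, a minimal Python sketch for turning one of those values back into a readable command, using the DOCKER-CT proctitle from the entries above:

```python
# Decode an audit PROCTITLE value (hex with NUL-separated argv) into the
# command it records. The sample is taken verbatim from the log above.
proctitle = (
    "2F7573722F62696E2F69707461626C6573002D2D77616974"
    "002D740066696C746572002D4E00444F434B45522D4354"
)
argv = bytes.fromhex(proctitle).split(b"\x00")
print(" ".join(arg.decode() for arg in argv))
# -> /usr/bin/iptables --wait -t filter -N DOCKER-CT
```

The same decoding applies to every PROCTITLE entry in this section; they spell out the iptables/ip6tables commands dockerd runs while setting up its DOCKER-* chains.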
Jan 21 23:38:44.039968 containerd[2113]: time="2026-01-21T23:38:44.039912282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:38:44.045213 containerd[2113]: time="2026-01-21T23:38:44.045159411Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=25791262" Jan 21 23:38:44.049317 containerd[2113]: time="2026-01-21T23:38:44.049060339Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:38:44.053051 containerd[2113]: time="2026-01-21T23:38:44.053013522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:38:44.053616 containerd[2113]: time="2026-01-21T23:38:44.053587434Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 2.188888694s" Jan 21 23:38:44.053665 containerd[2113]: time="2026-01-21T23:38:44.053619994Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Jan 21 23:38:44.055144 containerd[2113]: time="2026-01-21T23:38:44.055123170Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 21 23:38:45.600071 containerd[2113]: time="2026-01-21T23:38:45.599620525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:38:45.601994 containerd[2113]: time="2026-01-21T23:38:45.601971085Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23544927" Jan 21 23:38:45.604474 containerd[2113]: time="2026-01-21T23:38:45.604436629Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:38:45.611303 containerd[2113]: time="2026-01-21T23:38:45.610696861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:38:45.611303 containerd[2113]: time="2026-01-21T23:38:45.611179749Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.555976755s" Jan 21 23:38:45.611303 containerd[2113]: time="2026-01-21T23:38:45.611201397Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Jan 21 23:38:45.611718 
containerd[2113]: time="2026-01-21T23:38:45.611682205Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 21 23:38:46.818567 containerd[2113]: time="2026-01-21T23:38:46.818513909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:38:46.827557 containerd[2113]: time="2026-01-21T23:38:46.827512909Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18292683" Jan 21 23:38:46.830315 containerd[2113]: time="2026-01-21T23:38:46.830272453Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:38:46.835131 containerd[2113]: time="2026-01-21T23:38:46.835067141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:38:46.835529 containerd[2113]: time="2026-01-21T23:38:46.835388213Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.223530368s" Jan 21 23:38:46.835529 containerd[2113]: time="2026-01-21T23:38:46.835416325Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Jan 21 23:38:46.835912 containerd[2113]: time="2026-01-21T23:38:46.835883709Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 21 23:38:48.324164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2481416651.mount: Deactivated successfully. 
Jan 21 23:38:48.563530 containerd[2113]: time="2026-01-21T23:38:48.563479645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:38:48.566040 containerd[2113]: time="2026-01-21T23:38:48.565899661Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28254952" Jan 21 23:38:48.568538 containerd[2113]: time="2026-01-21T23:38:48.568511901Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:38:48.571740 containerd[2113]: time="2026-01-21T23:38:48.571697821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:38:48.572066 containerd[2113]: time="2026-01-21T23:38:48.571971901Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.735878896s" Jan 21 23:38:48.572066 containerd[2113]: time="2026-01-21T23:38:48.571998653Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Jan 21 23:38:48.572696 containerd[2113]: time="2026-01-21T23:38:48.572634965Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 21 23:38:49.478550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4000514119.mount: Deactivated successfully. 
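The "stop pulling image" and "Pulled image" pairs above give a rough sense of registry throughput: for kube-proxy the daemon reports 28254952 bytes read over a pull that took 1.735878896s. A quick back-of-the-envelope check (assuming "bytes read" counts the compressed layer data fetched for that pull, not the unpacked size containerd reports separately):

```python
# Approximate pull throughput for the kube-proxy image from the entries above.
bytes_read = 28_254_952        # "bytes read=28254952"
elapsed_s = 1.735878896        # "in 1.735878896s"
print(f"{bytes_read / elapsed_s / 1e6:.1f} MB/s")   # ~16.3 MB/s
```

The other pulls in this section can be estimated the same way from their own bytes-read and elapsed-time figures.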
Jan 21 23:38:50.101662 containerd[2113]: time="2026-01-21T23:38:50.101601269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:38:50.106486 containerd[2113]: time="2026-01-21T23:38:50.106253293Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=18338344" Jan 21 23:38:50.109061 containerd[2113]: time="2026-01-21T23:38:50.109033645Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:38:50.113789 containerd[2113]: time="2026-01-21T23:38:50.113750517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:38:50.114170 containerd[2113]: time="2026-01-21T23:38:50.114144317Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.541366304s" Jan 21 23:38:50.114368 containerd[2113]: time="2026-01-21T23:38:50.114217077Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jan 21 23:38:50.114716 containerd[2113]: time="2026-01-21T23:38:50.114700053Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 21 23:38:50.642315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1509568153.mount: Deactivated successfully. 
Jan 21 23:38:50.659085 containerd[2113]: time="2026-01-21T23:38:50.658614797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 21 23:38:50.661636 containerd[2113]: time="2026-01-21T23:38:50.661597757Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 21 23:38:50.665636 containerd[2113]: time="2026-01-21T23:38:50.665611581Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 21 23:38:50.669561 containerd[2113]: time="2026-01-21T23:38:50.669537141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 21 23:38:50.669865 containerd[2113]: time="2026-01-21T23:38:50.669840933Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 555.056ms" Jan 21 23:38:50.669912 containerd[2113]: time="2026-01-21T23:38:50.669867893Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 21 23:38:50.670447 containerd[2113]: time="2026-01-21T23:38:50.670392381Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 21 23:38:51.246107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount234900215.mount: Deactivated successfully. Jan 21 23:38:52.122942 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 21 23:38:52.124381 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 23:38:52.228040 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 23:38:52.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:52.245438 kernel: kauditd_printk_skb: 134 callbacks suppressed Jan 21 23:38:52.245516 kernel: audit: type=1130 audit(1769038732.228:304): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:38:52.246783 (kubelet)[2991]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 23:38:52.271856 kubelet[2991]: E0121 23:38:52.271804 2991 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 23:38:52.274074 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 23:38:52.274187 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 23:38:52.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 23:38:52.274520 systemd[1]: kubelet.service: Consumed 102ms CPU time, 104.7M memory peak. Jan 21 23:38:52.288066 kernel: audit: type=1131 audit(1769038732.273:305): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 23:38:53.834340 containerd[2113]: time="2026-01-21T23:38:53.833637458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:38:53.835985 containerd[2113]: time="2026-01-21T23:38:53.835943201Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69221029" Jan 21 23:38:53.838655 containerd[2113]: time="2026-01-21T23:38:53.838610418Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:38:53.842454 containerd[2113]: time="2026-01-21T23:38:53.842422806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:38:53.843039 containerd[2113]: time="2026-01-21T23:38:53.842864268Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.172444814s" Jan 21 23:38:53.843039 containerd[2113]: time="2026-01-21T23:38:53.842889164Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jan 21 23:38:56.456834 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 23:38:56.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:56.457160 systemd[1]: kubelet.service: Consumed 102ms CPU time, 104.7M memory peak. Jan 21 23:38:56.460549 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
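The kubelet failures recorded above (restart counters 2 and 3) trace back to the same cause: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-managed node this is typically expected noise, since that file is only written when kubeadm init or kubeadm join runs; until then the unit keeps crash-looping under its restart policy. A hypothetical Python helper (names and regexes are my own, matched to the exact strings in this log) for spotting that pattern when scanning journal text:

```python
import re

# Detect the kubelet crash-loop signature seen above: scheduled restarts plus
# the "config.yaml: no such file or directory" load error.
RESTART = re.compile(
    r"kubelet\.service: Scheduled restart job, restart counter is at (\d+)"
)
MISSING_CFG = re.compile(
    r"open /var/lib/kubelet/config\.yaml: no such file or directory"
)

def kubelet_crashloop_status(journal_text: str) -> str:
    restarts = [int(m.group(1)) for m in RESTART.finditer(journal_text)]
    if MISSING_CFG.search(journal_text) and restarts:
        return f"kubelet waiting for config.yaml (restart counter at {max(restarts)})"
    return "no kubelet config crash-loop detected"
```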
Jan 21 23:38:56.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:56.480704 kernel: audit: type=1130 audit(1769038736.456:306): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:56.480780 kernel: audit: type=1131 audit(1769038736.456:307): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:56.503270 systemd[1]: Reload requested from client PID 3032 ('systemctl') (unit session-9.scope)... Jan 21 23:38:56.503282 systemd[1]: Reloading... Jan 21 23:38:56.605108 zram_generator::config[3093]: No configuration found. Jan 21 23:38:56.737263 systemd[1]: Reloading finished in 233 ms. Jan 21 23:38:56.959133 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 21 23:38:56.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 23:38:56.959219 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 21 23:38:56.959766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 23:38:56.959842 systemd[1]: kubelet.service: Consumed 65ms CPU time, 89.5M memory peak. Jan 21 23:38:56.966313 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 23:38:56.974059 kernel: audit: type=1130 audit(1769038736.958:308): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 21 23:38:56.973000 audit: BPF prog-id=87 op=LOAD Jan 21 23:38:56.979155 kernel: audit: type=1334 audit(1769038736.973:309): prog-id=87 op=LOAD Jan 21 23:38:56.979000 audit: BPF prog-id=67 op=UNLOAD Jan 21 23:38:56.989368 kernel: audit: type=1334 audit(1769038736.979:310): prog-id=67 op=UNLOAD Jan 21 23:38:56.989449 kernel: audit: type=1334 audit(1769038736.984:311): prog-id=88 op=LOAD Jan 21 23:38:56.984000 audit: BPF prog-id=88 op=LOAD Jan 21 23:38:56.993369 kernel: audit: type=1334 audit(1769038736.984:312): prog-id=73 op=UNLOAD Jan 21 23:38:56.984000 audit: BPF prog-id=73 op=UNLOAD Jan 21 23:38:56.992000 audit: BPF prog-id=89 op=LOAD Jan 21 23:38:56.997382 kernel: audit: type=1334 audit(1769038736.992:313): prog-id=89 op=LOAD Jan 21 23:38:56.996000 audit: BPF prog-id=90 op=LOAD Jan 21 23:38:56.996000 audit: BPF prog-id=74 op=UNLOAD Jan 21 23:38:56.996000 audit: BPF prog-id=75 op=UNLOAD Jan 21 23:38:56.997000 audit: BPF prog-id=91 op=LOAD Jan 21 23:38:56.997000 audit: BPF prog-id=92 op=LOAD Jan 21 23:38:56.997000 audit: BPF prog-id=71 op=UNLOAD Jan 21 23:38:56.997000 audit: BPF prog-id=72 op=UNLOAD Jan 21 23:38:56.997000 audit: BPF prog-id=93 op=LOAD Jan 21 23:38:56.997000 audit: BPF prog-id=80 op=UNLOAD Jan 21 23:38:56.998000 audit: BPF prog-id=94 op=LOAD Jan 21 23:38:56.998000 audit: BPF prog-id=84 op=UNLOAD Jan 21 23:38:56.998000 audit: BPF prog-id=95 op=LOAD Jan 21 23:38:56.998000 audit: BPF prog-id=96 op=LOAD Jan 21 23:38:56.998000 audit: BPF prog-id=85 op=UNLOAD Jan 21 23:38:56.998000 audit: BPF prog-id=86 op=UNLOAD Jan 21 23:38:56.999000 audit: BPF prog-id=97 op=LOAD Jan 21 23:38:56.999000 audit: BPF prog-id=79 op=UNLOAD Jan 21 23:38:57.000000 audit: BPF prog-id=98 op=LOAD Jan 21 23:38:57.000000 audit: BPF prog-id=68 op=UNLOAD Jan 21 23:38:57.000000 audit: BPF prog-id=99 op=LOAD Jan 21 23:38:57.000000 audit: BPF prog-id=100 op=LOAD Jan 21 23:38:57.000000 audit: BPF prog-id=69 op=UNLOAD Jan 21 23:38:57.000000 audit: BPF prog-id=70 op=UNLOAD Jan 21 23:38:57.002000 audit: BPF prog-id=101 op=LOAD Jan 21 23:38:57.002000 audit: BPF prog-id=76 op=UNLOAD Jan 21 23:38:57.002000 audit: BPF prog-id=102 op=LOAD Jan 21 23:38:57.002000 audit: BPF prog-id=103 op=LOAD Jan 21 23:38:57.002000 audit: BPF prog-id=77 op=UNLOAD Jan 21 23:38:57.002000 audit: BPF prog-id=78 op=UNLOAD Jan 21 23:38:57.002000 audit: BPF prog-id=104 op=LOAD Jan 21 23:38:57.003000 audit: BPF prog-id=81 op=UNLOAD Jan 21 23:38:57.003000 audit: BPF prog-id=105 op=LOAD Jan 21 23:38:57.003000 audit: BPF prog-id=106 op=LOAD Jan 21 23:38:57.003000 audit: BPF prog-id=82 op=UNLOAD Jan 21 23:38:57.003000 audit: BPF prog-id=83 op=UNLOAD Jan 21 23:38:57.100035 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 23:38:57.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:57.103771 (kubelet)[3146]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 21 23:38:57.219466 kubelet[3146]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 23:38:57.219466 kubelet[3146]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jan 21 23:38:57.219466 kubelet[3146]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 23:38:57.219833 kubelet[3146]: I0121 23:38:57.219498 3146 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 21 23:38:57.802885 kubelet[3146]: I0121 23:38:57.802842 3146 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 21 23:38:57.803067 kubelet[3146]: I0121 23:38:57.803040 3146 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 21 23:38:57.803300 kubelet[3146]: I0121 23:38:57.803286 3146 server.go:956] "Client rotation is on, will bootstrap in background" Jan 21 23:38:57.822082 kubelet[3146]: E0121 23:38:57.822019 3146 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.29:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.29:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 21 23:38:57.823906 kubelet[3146]: I0121 23:38:57.823876 3146 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 21 23:38:57.832857 kubelet[3146]: I0121 23:38:57.832838 3146 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 21 23:38:57.835676 kubelet[3146]: I0121 23:38:57.835657 3146 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 21 23:38:57.836757 kubelet[3146]: I0121 23:38:57.836719 3146 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 21 23:38:57.836963 kubelet[3146]: I0121 23:38:57.836832 3146 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515.1.0-n-a0ba06055b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 21 23:38:57.837118 kubelet[3146]: I0121 23:38:57.837104 3146 topology_manager.go:138] "Creating topology manager with none policy" Jan 21 23:38:57.837180 kubelet[3146]: I0121 23:38:57.837173 3146 container_manager_linux.go:303] "Creating device plugin manager" Jan 21 23:38:57.837360 kubelet[3146]: I0121 23:38:57.837345 3146 state_mem.go:36] "Initialized new in-memory state store" Jan 21 23:38:57.839802 kubelet[3146]: I0121 23:38:57.839783 3146 kubelet.go:480] "Attempting to sync node with API server" Jan 21 23:38:57.840005 kubelet[3146]: I0121 23:38:57.839989 3146 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 21 23:38:57.840118 kubelet[3146]: I0121 23:38:57.840108 3146 kubelet.go:386] "Adding apiserver pod source" Jan 21 23:38:57.840382 kubelet[3146]: I0121 23:38:57.840366 3146 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 21 23:38:57.844482 kubelet[3146]: E0121 23:38:57.844429 3146 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-n-a0ba06055b&limit=500&resourceVersion=0\": dial tcp 10.200.20.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 21 23:38:57.844772 kubelet[3146]: E0121 23:38:57.844740 3146 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.29:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Jan 21 23:38:57.844819 kubelet[3146]: I0121 23:38:57.844809 3146 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 21 23:38:57.845644 kubelet[3146]: I0121 23:38:57.845189 3146 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 21 23:38:57.845644 kubelet[3146]: W0121 23:38:57.845230 3146 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 21 23:38:57.847587 kubelet[3146]: I0121 23:38:57.847572 3146 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 21 23:38:57.847683 kubelet[3146]: I0121 23:38:57.847675 3146 server.go:1289] "Started kubelet" Jan 21 23:38:57.848824 kubelet[3146]: I0121 23:38:57.848805 3146 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 21 23:38:57.851140 kubelet[3146]: E0121 23:38:57.850303 3146 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.29:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.29:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4515.1.0-n-a0ba06055b.188ce35bfb5081b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4515.1.0-n-a0ba06055b,UID:ci-4515.1.0-n-a0ba06055b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4515.1.0-n-a0ba06055b,},FirstTimestamp:2026-01-21 23:38:57.847648697 +0000 UTC m=+0.741207255,LastTimestamp:2026-01-21 23:38:57.847648697 +0000 UTC m=+0.741207255,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515.1.0-n-a0ba06055b,}" Jan 21 23:38:57.851650 kubelet[3146]: I0121 23:38:57.851415 3146 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 21 23:38:57.851964 kubelet[3146]: I0121 23:38:57.851938 3146 server.go:317] "Adding debug handlers to kubelet server" Jan 21 23:38:57.857795 kernel: kauditd_printk_skb: 36 callbacks suppressed Jan 21 23:38:57.857863 kernel: audit: type=1325 audit(1769038737.853:350): table=mangle:45 family=2 entries=2 op=nft_register_chain pid=3162 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:57.853000 audit[3162]: NETFILTER_CFG table=mangle:45 family=2 entries=2 op=nft_register_chain pid=3162 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:57.857943 kubelet[3146]: I0121 23:38:57.854510 3146 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 21 23:38:57.857943 kubelet[3146]: I0121 23:38:57.854680 3146 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 21 23:38:57.857943 kubelet[3146]: I0121 23:38:57.854805 3146 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 21 23:38:57.857943 kubelet[3146]: E0121 23:38:57.856076 3146 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-n-a0ba06055b\" not found" Jan 21 23:38:57.857943 kubelet[3146]: I0121 23:38:57.856100 3146 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 21 23:38:57.857943 kubelet[3146]: I0121 23:38:57.856231 3146 
desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 21 23:38:57.857943 kubelet[3146]: I0121 23:38:57.856275 3146 reconciler.go:26] "Reconciler: start to sync state" Jan 21 23:38:57.857943 kubelet[3146]: E0121 23:38:57.856509 3146 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 21 23:38:57.857943 kubelet[3146]: E0121 23:38:57.856676 3146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-n-a0ba06055b?timeout=10s\": dial tcp 10.200.20.29:6443: connect: connection refused" interval="200ms" Jan 21 23:38:57.860444 kubelet[3146]: I0121 23:38:57.859932 3146 factory.go:223] Registration of the systemd container factory successfully Jan 21 23:38:57.860444 kubelet[3146]: I0121 23:38:57.859997 3146 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 21 23:38:57.862372 kubelet[3146]: E0121 23:38:57.862357 3146 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 21 23:38:57.862969 kubelet[3146]: I0121 23:38:57.862947 3146 factory.go:223] Registration of the containerd container factory successfully Jan 21 23:38:57.853000 audit[3162]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe37a6be0 a2=0 a3=0 items=0 ppid=3146 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:57.883974 kernel: audit: type=1300 audit(1769038737.853:350): arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe37a6be0 a2=0 a3=0 items=0 ppid=3146 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:57.853000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 21 23:38:57.893412 kernel: audit: type=1327 audit(1769038737.853:350): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 21 23:38:57.872000 audit[3163]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=3163 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:57.902650 kernel: audit: type=1325 audit(1769038737.872:351): table=filter:46 family=2 entries=1 op=nft_register_chain pid=3163 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:57.872000 audit[3163]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffda4ace90 a2=0 a3=0 items=0 ppid=3146 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:57.903753 kubelet[3146]: I0121 23:38:57.903719 3146 cpu_manager.go:221] "Starting CPU 
manager" policy="none" Jan 21 23:38:57.903753 kubelet[3146]: I0121 23:38:57.903734 3146 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 21 23:38:57.903943 kubelet[3146]: I0121 23:38:57.903870 3146 state_mem.go:36] "Initialized new in-memory state store" Jan 21 23:38:57.919780 kernel: audit: type=1300 audit(1769038737.872:351): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffda4ace90 a2=0 a3=0 items=0 ppid=3146 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:57.872000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 21 23:38:57.929142 kernel: audit: type=1327 audit(1769038737.872:351): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 21 23:38:57.893000 audit[3165]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=3165 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:57.938031 kernel: audit: type=1325 audit(1769038737.893:352): table=filter:47 family=2 entries=2 op=nft_register_chain pid=3165 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:57.893000 audit[3165]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc8be6430 a2=0 a3=0 items=0 ppid=3146 pid=3165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:57.939176 kubelet[3146]: I0121 23:38:57.938813 3146 policy_none.go:49] "None policy: Start" Jan 21 23:38:57.939176 kubelet[3146]: I0121 23:38:57.938836 3146 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 21 23:38:57.939176 kubelet[3146]: I0121 23:38:57.938847 3146 state_mem.go:35] "Initializing new in-memory state store" Jan 21 23:38:57.955403 kernel: audit: type=1300 audit(1769038737.893:352): arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc8be6430 a2=0 a3=0 items=0 ppid=3146 pid=3165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:57.956413 kubelet[3146]: E0121 23:38:57.956386 3146 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-n-a0ba06055b\" not found" Jan 21 23:38:57.893000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 21 23:38:57.965507 kernel: audit: type=1327 audit(1769038737.893:352): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 21 23:38:57.899000 audit[3168]: NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=3168 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:57.969240 kubelet[3146]: I0121 23:38:57.969197 3146 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 21 23:38:57.973608 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 21 23:38:57.977401 kernel: audit: type=1325 audit(1769038737.899:353): table=filter:48 family=2 entries=2 op=nft_register_chain pid=3168 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:57.899000 audit[3168]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc3652660 a2=0 a3=0 items=0 ppid=3146 pid=3168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:57.899000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 21 23:38:57.967000 audit[3172]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3172 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:57.967000 audit[3172]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffff91323a0 a2=0 a3=0 items=0 ppid=3146 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:57.967000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 21 23:38:57.973000 audit[3175]: NETFILTER_CFG table=mangle:50 family=2 entries=1 op=nft_register_chain pid=3175 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:57.973000 audit[3175]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe9bc5120 a2=0 a3=0 items=0 ppid=3146 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:57.973000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 21 23:38:57.977000 audit[3176]: NETFILTER_CFG table=nat:51 family=2 entries=1 op=nft_register_chain pid=3176 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:57.977000 audit[3176]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffc89cf90 a2=0 a3=0 items=0 ppid=3146 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:57.977000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 21 23:38:57.977000 audit[3174]: NETFILTER_CFG table=mangle:52 family=10 entries=2 op=nft_register_chain pid=3174 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:38:57.977000 audit[3174]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffeb5784d0 a2=0 a3=0 items=0 ppid=3146 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:57.977000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 21 23:38:57.979487 
kubelet[3146]: I0121 23:38:57.978931 3146 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 21 23:38:57.979487 kubelet[3146]: I0121 23:38:57.978950 3146 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 21 23:38:57.979487 kubelet[3146]: I0121 23:38:57.978969 3146 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 21 23:38:57.979487 kubelet[3146]: I0121 23:38:57.978973 3146 kubelet.go:2436] "Starting kubelet main sync loop" Jan 21 23:38:57.979487 kubelet[3146]: E0121 23:38:57.979004 3146 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 21 23:38:57.980031 kubelet[3146]: E0121 23:38:57.979992 3146 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 21 23:38:57.979000 audit[3178]: NETFILTER_CFG table=mangle:53 family=10 entries=1 op=nft_register_chain pid=3178 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:38:57.979000 audit[3178]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd42ea810 a2=0 a3=0 items=0 ppid=3146 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:57.979000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 21 23:38:57.980000 audit[3177]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=3177 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:38:57.980000 audit[3177]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc97628b0 a2=0 a3=0 items=0 ppid=3146 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:57.981000 audit[3181]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=3181 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:38:57.981000 audit[3181]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdd0903b0 a2=0 a3=0 items=0 ppid=3146 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:57.981000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 21 23:38:57.980000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 21 23:38:57.982000 audit[3182]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=3182 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:38:57.982000 audit[3182]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffff2f4ea0 a2=0 a3=0 items=0 ppid=3146 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:57.982000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 21 23:38:57.984820 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 21 23:38:57.987747 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 21 23:38:57.996891 kubelet[3146]: E0121 23:38:57.996691 3146 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 21 23:38:57.996891 kubelet[3146]: I0121 23:38:57.996867 3146 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 21 23:38:57.997056 kubelet[3146]: I0121 23:38:57.997012 3146 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 21 23:38:57.997319 kubelet[3146]: I0121 23:38:57.997301 3146 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 21 23:38:57.998313 kubelet[3146]: E0121 23:38:57.998297 3146 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 21 23:38:57.998444 kubelet[3146]: E0121 23:38:57.998433 3146 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4515.1.0-n-a0ba06055b\" not found" Jan 21 23:38:58.058071 kubelet[3146]: E0121 23:38:58.057161 3146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-n-a0ba06055b?timeout=10s\": dial tcp 10.200.20.29:6443: connect: connection refused" interval="400ms" Jan 21 23:38:58.090814 systemd[1]: Created slice kubepods-burstable-pod28780b5af20e6ebc4455e0047f95eeab.slice - libcontainer container kubepods-burstable-pod28780b5af20e6ebc4455e0047f95eeab.slice. Jan 21 23:38:58.100436 kubelet[3146]: I0121 23:38:58.100354 3146 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-n-a0ba06055b" Jan 21 23:38:58.100899 kubelet[3146]: E0121 23:38:58.100877 3146 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-a0ba06055b\" not found" node="ci-4515.1.0-n-a0ba06055b" Jan 21 23:38:58.101307 kubelet[3146]: E0121 23:38:58.101288 3146 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.29:6443/api/v1/nodes\": dial tcp 10.200.20.29:6443: connect: connection refused" node="ci-4515.1.0-n-a0ba06055b" Jan 21 23:38:58.104820 systemd[1]: Created slice kubepods-burstable-pod2dc336c01ecde0ca770093d7f9e06fd1.slice - libcontainer container kubepods-burstable-pod2dc336c01ecde0ca770093d7f9e06fd1.slice. Jan 21 23:38:58.115075 kubelet[3146]: E0121 23:38:58.115056 3146 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-a0ba06055b\" not found" node="ci-4515.1.0-n-a0ba06055b" Jan 21 23:38:58.117473 systemd[1]: Created slice kubepods-burstable-pod7a375bdb5c5225ec7a7f95eef18be5b4.slice - libcontainer container kubepods-burstable-pod7a375bdb5c5225ec7a7f95eef18be5b4.slice. 
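The audit PROCTITLE fields in the NETFILTER_CFG records above are the invoking command line, hex-encoded with NUL separators between arguments. A minimal stdlib-only Go sketch to decode them is below; the sample value is copied from one of the entries above and decodes to "iptables -w 5 -W 100000 -N KUBE-KUBELET-CANARY -t mangle", i.e. the kubelet creating its canary chain in the mangle table.

```go
// Sketch: decode an audit PROCTITLE value (hex-encoded argv, NUL-separated)
// back into a readable command line.
package main

import (
	"bytes"
	"encoding/hex"
	"fmt"
	"log"
	"strings"
)

func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	args := bytes.Split(raw, []byte{0}) // arguments are NUL-separated
	parts := make([]string, 0, len(args))
	for _, a := range args {
		parts = append(parts, string(a))
	}
	return strings.Join(parts, " "), nil
}

func main() {
	// Value copied from a NETFILTER_CFG/PROCTITLE record above.
	const sample = "69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65"
	cmd, err := decodeProctitle(sample)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(cmd)
}
```

The same decoder works on the longer runc PROCTITLE records further down, which resolve to runc invocations rooted at /run/containerd/runc/k8s.io.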
Jan 21 23:38:58.119316 kubelet[3146]: E0121 23:38:58.119179 3146 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-a0ba06055b\" not found" node="ci-4515.1.0-n-a0ba06055b" Jan 21 23:38:58.158664 kubelet[3146]: I0121 23:38:58.158594 3146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2dc336c01ecde0ca770093d7f9e06fd1-flexvolume-dir\") pod \"kube-controller-manager-ci-4515.1.0-n-a0ba06055b\" (UID: \"2dc336c01ecde0ca770093d7f9e06fd1\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-a0ba06055b" Jan 21 23:38:58.158872 kubelet[3146]: I0121 23:38:58.158815 3146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2dc336c01ecde0ca770093d7f9e06fd1-k8s-certs\") pod \"kube-controller-manager-ci-4515.1.0-n-a0ba06055b\" (UID: \"2dc336c01ecde0ca770093d7f9e06fd1\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-a0ba06055b" Jan 21 23:38:58.158872 kubelet[3146]: I0121 23:38:58.158837 3146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2dc336c01ecde0ca770093d7f9e06fd1-kubeconfig\") pod \"kube-controller-manager-ci-4515.1.0-n-a0ba06055b\" (UID: \"2dc336c01ecde0ca770093d7f9e06fd1\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-a0ba06055b" Jan 21 23:38:58.158872 kubelet[3146]: I0121 23:38:58.158849 3146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7a375bdb5c5225ec7a7f95eef18be5b4-kubeconfig\") pod \"kube-scheduler-ci-4515.1.0-n-a0ba06055b\" (UID: \"7a375bdb5c5225ec7a7f95eef18be5b4\") " pod="kube-system/kube-scheduler-ci-4515.1.0-n-a0ba06055b" Jan 21 23:38:58.159069 kubelet[3146]: I0121 23:38:58.158859 3146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/28780b5af20e6ebc4455e0047f95eeab-k8s-certs\") pod \"kube-apiserver-ci-4515.1.0-n-a0ba06055b\" (UID: \"28780b5af20e6ebc4455e0047f95eeab\") " pod="kube-system/kube-apiserver-ci-4515.1.0-n-a0ba06055b" Jan 21 23:38:58.159069 kubelet[3146]: I0121 23:38:58.159005 3146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/28780b5af20e6ebc4455e0047f95eeab-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515.1.0-n-a0ba06055b\" (UID: \"28780b5af20e6ebc4455e0047f95eeab\") " pod="kube-system/kube-apiserver-ci-4515.1.0-n-a0ba06055b" Jan 21 23:38:58.159069 kubelet[3146]: I0121 23:38:58.159017 3146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2dc336c01ecde0ca770093d7f9e06fd1-ca-certs\") pod \"kube-controller-manager-ci-4515.1.0-n-a0ba06055b\" (UID: \"2dc336c01ecde0ca770093d7f9e06fd1\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-a0ba06055b" Jan 21 23:38:58.159237 kubelet[3146]: I0121 23:38:58.159030 3146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2dc336c01ecde0ca770093d7f9e06fd1-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4515.1.0-n-a0ba06055b\" (UID: \"2dc336c01ecde0ca770093d7f9e06fd1\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-a0ba06055b" Jan 21 23:38:58.159237 kubelet[3146]: I0121 23:38:58.159207 3146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/28780b5af20e6ebc4455e0047f95eeab-ca-certs\") pod \"kube-apiserver-ci-4515.1.0-n-a0ba06055b\" (UID: \"28780b5af20e6ebc4455e0047f95eeab\") " pod="kube-system/kube-apiserver-ci-4515.1.0-n-a0ba06055b" Jan 21 23:38:58.303846 kubelet[3146]: I0121 23:38:58.303506 3146 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-n-a0ba06055b" Jan 21 23:38:58.303846 kubelet[3146]: E0121 23:38:58.303805 3146 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.29:6443/api/v1/nodes\": dial tcp 10.200.20.29:6443: connect: connection refused" node="ci-4515.1.0-n-a0ba06055b" Jan 21 23:38:58.402541 containerd[2113]: time="2026-01-21T23:38:58.402433537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515.1.0-n-a0ba06055b,Uid:28780b5af20e6ebc4455e0047f95eeab,Namespace:kube-system,Attempt:0,}" Jan 21 23:38:58.415951 containerd[2113]: time="2026-01-21T23:38:58.415912709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515.1.0-n-a0ba06055b,Uid:2dc336c01ecde0ca770093d7f9e06fd1,Namespace:kube-system,Attempt:0,}" Jan 21 23:38:58.420790 containerd[2113]: time="2026-01-21T23:38:58.420707327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515.1.0-n-a0ba06055b,Uid:7a375bdb5c5225ec7a7f95eef18be5b4,Namespace:kube-system,Attempt:0,}" Jan 21 23:38:58.448081 containerd[2113]: time="2026-01-21T23:38:58.447949359Z" level=info msg="connecting to shim 90e6fa4d40961a0e0ff19d52e3becbc098a65d800da4a5d546d89592e9e7c6f2" address="unix:///run/containerd/s/2fae5a883598c9d2e8d58564de0e54f46cd15e893bab9bccedebe208877241f6" namespace=k8s.io protocol=ttrpc version=3 Jan 21 23:38:58.458843 kubelet[3146]: E0121 23:38:58.458807 3146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-n-a0ba06055b?timeout=10s\": dial tcp 10.200.20.29:6443: connect: connection refused" interval="800ms" Jan 21 23:38:58.469241 systemd[1]: Started cri-containerd-90e6fa4d40961a0e0ff19d52e3becbc098a65d800da4a5d546d89592e9e7c6f2.scope - libcontainer container 90e6fa4d40961a0e0ff19d52e3becbc098a65d800da4a5d546d89592e9e7c6f2. 
Jan 21 23:38:58.477000 audit: BPF prog-id=107 op=LOAD Jan 21 23:38:58.478000 audit: BPF prog-id=108 op=LOAD Jan 21 23:38:58.478000 audit[3203]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3191 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930653666613464343039363161306530666631396435326533626563 Jan 21 23:38:58.478000 audit: BPF prog-id=108 op=UNLOAD Jan 21 23:38:58.478000 audit[3203]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3191 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930653666613464343039363161306530666631396435326533626563 Jan 21 23:38:58.478000 audit: BPF prog-id=109 op=LOAD Jan 21 23:38:58.478000 audit[3203]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3191 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930653666613464343039363161306530666631396435326533626563 Jan 21 23:38:58.478000 audit: BPF prog-id=110 op=LOAD Jan 21 23:38:58.478000 audit[3203]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3191 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930653666613464343039363161306530666631396435326533626563 Jan 21 23:38:58.478000 audit: BPF prog-id=110 op=UNLOAD Jan 21 23:38:58.478000 audit[3203]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3191 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930653666613464343039363161306530666631396435326533626563 Jan 21 23:38:58.478000 audit: BPF prog-id=109 op=UNLOAD Jan 21 23:38:58.478000 audit[3203]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3191 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930653666613464343039363161306530666631396435326533626563 Jan 21 23:38:58.478000 audit: BPF prog-id=111 op=LOAD Jan 21 23:38:58.478000 audit[3203]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3191 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930653666613464343039363161306530666631396435326533626563 Jan 21 23:38:58.503699 containerd[2113]: time="2026-01-21T23:38:58.503651453Z" level=info msg="connecting to shim 8ad4550b262c7a992ab4e5ca20009e49c7489c261a96fa0adca1ff09b69f66e5" address="unix:///run/containerd/s/ac428e47b93c36c37c49ec8ea24781bee15a56d8d19b9b06d00d3b9efa6ac116" namespace=k8s.io protocol=ttrpc version=3 Jan 21 23:38:58.510311 containerd[2113]: time="2026-01-21T23:38:58.510228246Z" level=info msg="connecting to shim 0880d2bca993747ca7a4f96a62aa24bc1835f3c0306cd58cb341504f6f3f2009" address="unix:///run/containerd/s/cf4892ca5f1440369a36818205099dea9acc27600f505540b5b81bfc492b90fd" namespace=k8s.io protocol=ttrpc version=3 Jan 21 23:38:58.516693 containerd[2113]: time="2026-01-21T23:38:58.516475861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515.1.0-n-a0ba06055b,Uid:28780b5af20e6ebc4455e0047f95eeab,Namespace:kube-system,Attempt:0,} returns sandbox id \"90e6fa4d40961a0e0ff19d52e3becbc098a65d800da4a5d546d89592e9e7c6f2\"" Jan 21 23:38:58.525686 containerd[2113]: time="2026-01-21T23:38:58.525647957Z" level=info msg="CreateContainer within sandbox \"90e6fa4d40961a0e0ff19d52e3becbc098a65d800da4a5d546d89592e9e7c6f2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 21 23:38:58.535226 systemd[1]: Started cri-containerd-0880d2bca993747ca7a4f96a62aa24bc1835f3c0306cd58cb341504f6f3f2009.scope - libcontainer container 0880d2bca993747ca7a4f96a62aa24bc1835f3c0306cd58cb341504f6f3f2009. Jan 21 23:38:58.536265 systemd[1]: Started cri-containerd-8ad4550b262c7a992ab4e5ca20009e49c7489c261a96fa0adca1ff09b69f66e5.scope - libcontainer container 8ad4550b262c7a992ab4e5ca20009e49c7489c261a96fa0adca1ff09b69f66e5. 
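The long runs of "audit: BPF prog-id=N op=LOAD" / "op=UNLOAD" records in this stretch accompany the runc invocations visible in the neighbouring SYSCALL/PROCTITLE entries (comm="runc", syscall 280, which is bpf on arm64) as the container sandboxes are set up. A small stdlib sketch for sanity-checking them is below: it scans a saved copy of this log and reports any program id that was loaded without a matching unload. The command-line handling is an assumption of the sketch.

```go
// Sketch: tally BPF LOAD/UNLOAD audit records in a saved log file and print
// program ids that never saw an UNLOAD.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"regexp"
)

var bpfRe = regexp.MustCompile(`BPF prog-id=(\d+) op=(LOAD|UNLOAD)`)

func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: bpfaudit <logfile>")
	}
	f, err := os.Open(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	live := map[string]bool{}
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // lines in this log are very long
	for sc.Scan() {
		for _, m := range bpfRe.FindAllStringSubmatch(sc.Text(), -1) {
			if m[2] == "LOAD" {
				live[m[1]] = true
			} else {
				delete(live, m[1])
			}
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%d BPF program ids loaded without a matching UNLOAD\n", len(live))
	for id := range live {
		fmt.Println("prog-id", id)
	}
}
```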
Jan 21 23:38:58.555000 audit: BPF prog-id=112 op=LOAD Jan 21 23:38:58.557067 containerd[2113]: time="2026-01-21T23:38:58.556714866Z" level=info msg="Container d4a70d6754d5873539ec1c9cb7faab67d0d183a7e35dc8005dc3b817db2ddf2a: CDI devices from CRI Config.CDIDevices: []" Jan 21 23:38:58.556000 audit: BPF prog-id=113 op=LOAD Jan 21 23:38:58.556000 audit[3269]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=3240 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861643435353062323632633761393932616234653563613230303039 Jan 21 23:38:58.556000 audit: BPF prog-id=113 op=UNLOAD Jan 21 23:38:58.556000 audit[3269]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3240 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861643435353062323632633761393932616234653563613230303039 Jan 21 23:38:58.556000 audit: BPF prog-id=114 op=LOAD Jan 21 23:38:58.557000 audit: BPF prog-id=115 op=LOAD Jan 21 23:38:58.557000 audit: BPF prog-id=116 op=LOAD Jan 21 23:38:58.557000 audit[3271]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=3252 pid=3271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.557000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038383064326263613939333734376361376134663936613632616132 Jan 21 23:38:58.557000 audit: BPF prog-id=116 op=UNLOAD Jan 21 23:38:58.557000 audit[3271]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3252 pid=3271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.557000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038383064326263613939333734376361376134663936613632616132 Jan 21 23:38:58.557000 audit[3269]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=3240 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.557000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861643435353062323632633761393932616234653563613230303039 Jan 21 23:38:58.557000 audit: BPF prog-id=117 op=LOAD Jan 21 23:38:58.557000 audit[3269]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=3240 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.557000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861643435353062323632633761393932616234653563613230303039 Jan 21 23:38:58.557000 audit: BPF prog-id=117 op=UNLOAD Jan 21 23:38:58.557000 audit[3269]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3240 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.557000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861643435353062323632633761393932616234653563613230303039 Jan 21 23:38:58.557000 audit: BPF prog-id=115 op=UNLOAD Jan 21 23:38:58.557000 audit[3269]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3240 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.557000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861643435353062323632633761393932616234653563613230303039 Jan 21 23:38:58.557000 audit: BPF prog-id=118 op=LOAD Jan 21 23:38:58.557000 audit[3271]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3252 pid=3271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.557000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038383064326263613939333734376361376134663936613632616132 Jan 21 23:38:58.557000 audit: BPF prog-id=119 op=LOAD Jan 21 23:38:58.557000 audit[3271]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=3252 pid=3271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.557000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038383064326263613939333734376361376134663936613632616132 Jan 21 23:38:58.557000 audit: BPF prog-id=119 op=UNLOAD Jan 21 23:38:58.557000 audit[3271]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3252 pid=3271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.557000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038383064326263613939333734376361376134663936613632616132 Jan 21 23:38:58.557000 audit: BPF prog-id=118 op=UNLOAD Jan 21 23:38:58.557000 audit[3271]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3252 pid=3271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.557000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038383064326263613939333734376361376134663936613632616132 Jan 21 23:38:58.557000 audit: BPF prog-id=120 op=LOAD Jan 21 23:38:58.557000 audit[3271]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=3252 pid=3271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.557000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038383064326263613939333734376361376134663936613632616132 Jan 21 23:38:58.557000 audit: BPF prog-id=121 op=LOAD Jan 21 23:38:58.557000 audit[3269]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=3240 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.557000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861643435353062323632633761393932616234653563613230303039 Jan 21 23:38:58.579815 containerd[2113]: time="2026-01-21T23:38:58.579767394Z" level=info msg="CreateContainer within sandbox \"90e6fa4d40961a0e0ff19d52e3becbc098a65d800da4a5d546d89592e9e7c6f2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d4a70d6754d5873539ec1c9cb7faab67d0d183a7e35dc8005dc3b817db2ddf2a\"" Jan 21 23:38:58.580514 containerd[2113]: time="2026-01-21T23:38:58.580478936Z" level=info msg="StartContainer for \"d4a70d6754d5873539ec1c9cb7faab67d0d183a7e35dc8005dc3b817db2ddf2a\"" Jan 21 23:38:58.581627 containerd[2113]: 
time="2026-01-21T23:38:58.581600410Z" level=info msg="connecting to shim d4a70d6754d5873539ec1c9cb7faab67d0d183a7e35dc8005dc3b817db2ddf2a" address="unix:///run/containerd/s/2fae5a883598c9d2e8d58564de0e54f46cd15e893bab9bccedebe208877241f6" protocol=ttrpc version=3 Jan 21 23:38:58.586402 containerd[2113]: time="2026-01-21T23:38:58.586329715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515.1.0-n-a0ba06055b,Uid:7a375bdb5c5225ec7a7f95eef18be5b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"0880d2bca993747ca7a4f96a62aa24bc1835f3c0306cd58cb341504f6f3f2009\"" Jan 21 23:38:58.592999 containerd[2113]: time="2026-01-21T23:38:58.592970902Z" level=info msg="CreateContainer within sandbox \"0880d2bca993747ca7a4f96a62aa24bc1835f3c0306cd58cb341504f6f3f2009\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 21 23:38:58.596963 containerd[2113]: time="2026-01-21T23:38:58.596867933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515.1.0-n-a0ba06055b,Uid:2dc336c01ecde0ca770093d7f9e06fd1,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ad4550b262c7a992ab4e5ca20009e49c7489c261a96fa0adca1ff09b69f66e5\"" Jan 21 23:38:58.600222 systemd[1]: Started cri-containerd-d4a70d6754d5873539ec1c9cb7faab67d0d183a7e35dc8005dc3b817db2ddf2a.scope - libcontainer container d4a70d6754d5873539ec1c9cb7faab67d0d183a7e35dc8005dc3b817db2ddf2a. Jan 21 23:38:58.604390 containerd[2113]: time="2026-01-21T23:38:58.604357482Z" level=info msg="CreateContainer within sandbox \"8ad4550b262c7a992ab4e5ca20009e49c7489c261a96fa0adca1ff09b69f66e5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 21 23:38:58.609000 audit: BPF prog-id=122 op=LOAD Jan 21 23:38:58.610000 audit: BPF prog-id=123 op=LOAD Jan 21 23:38:58.610000 audit[3314]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3191 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.610000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434613730643637353464353837333533396563316339636237666161 Jan 21 23:38:58.610000 audit: BPF prog-id=123 op=UNLOAD Jan 21 23:38:58.610000 audit[3314]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3191 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.610000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434613730643637353464353837333533396563316339636237666161 Jan 21 23:38:58.610000 audit: BPF prog-id=124 op=LOAD Jan 21 23:38:58.610000 audit[3314]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3191 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.610000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434613730643637353464353837333533396563316339636237666161 Jan 21 23:38:58.610000 audit: BPF prog-id=125 op=LOAD Jan 21 23:38:58.610000 audit[3314]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3191 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.610000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434613730643637353464353837333533396563316339636237666161 Jan 21 23:38:58.610000 audit: BPF prog-id=125 op=UNLOAD Jan 21 23:38:58.610000 audit[3314]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3191 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.610000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434613730643637353464353837333533396563316339636237666161 Jan 21 23:38:58.610000 audit: BPF prog-id=124 op=UNLOAD Jan 21 23:38:58.610000 audit[3314]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3191 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.610000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434613730643637353464353837333533396563316339636237666161 Jan 21 23:38:58.610000 audit: BPF prog-id=126 op=LOAD Jan 21 23:38:58.610000 audit[3314]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3191 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.610000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434613730643637353464353837333533396563316339636237666161 Jan 21 23:38:58.618820 containerd[2113]: time="2026-01-21T23:38:58.618646654Z" level=info msg="Container 5ee5117538f361c45360a9102d22329fdeb98068b07228e15aecab1a4dd024bd: CDI devices from CRI Config.CDIDevices: []" Jan 21 23:38:58.625611 containerd[2113]: time="2026-01-21T23:38:58.625576706Z" level=info msg="Container c31f2d42cac3d7f41c265e4250ea594fd42709dda3b70eb80076689723782686: CDI devices from CRI Config.CDIDevices: []" Jan 21 23:38:58.638196 containerd[2113]: time="2026-01-21T23:38:58.637820384Z" level=info msg="CreateContainer within sandbox 
\"0880d2bca993747ca7a4f96a62aa24bc1835f3c0306cd58cb341504f6f3f2009\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5ee5117538f361c45360a9102d22329fdeb98068b07228e15aecab1a4dd024bd\"" Jan 21 23:38:58.638398 containerd[2113]: time="2026-01-21T23:38:58.637997845Z" level=info msg="StartContainer for \"d4a70d6754d5873539ec1c9cb7faab67d0d183a7e35dc8005dc3b817db2ddf2a\" returns successfully" Jan 21 23:38:58.638971 containerd[2113]: time="2026-01-21T23:38:58.638746804Z" level=info msg="StartContainer for \"5ee5117538f361c45360a9102d22329fdeb98068b07228e15aecab1a4dd024bd\"" Jan 21 23:38:58.642392 containerd[2113]: time="2026-01-21T23:38:58.642371459Z" level=info msg="connecting to shim 5ee5117538f361c45360a9102d22329fdeb98068b07228e15aecab1a4dd024bd" address="unix:///run/containerd/s/cf4892ca5f1440369a36818205099dea9acc27600f505540b5b81bfc492b90fd" protocol=ttrpc version=3 Jan 21 23:38:58.646069 containerd[2113]: time="2026-01-21T23:38:58.645314533Z" level=info msg="CreateContainer within sandbox \"8ad4550b262c7a992ab4e5ca20009e49c7489c261a96fa0adca1ff09b69f66e5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c31f2d42cac3d7f41c265e4250ea594fd42709dda3b70eb80076689723782686\"" Jan 21 23:38:58.646561 containerd[2113]: time="2026-01-21T23:38:58.646503065Z" level=info msg="StartContainer for \"c31f2d42cac3d7f41c265e4250ea594fd42709dda3b70eb80076689723782686\"" Jan 21 23:38:58.649070 containerd[2113]: time="2026-01-21T23:38:58.648868146Z" level=info msg="connecting to shim c31f2d42cac3d7f41c265e4250ea594fd42709dda3b70eb80076689723782686" address="unix:///run/containerd/s/ac428e47b93c36c37c49ec8ea24781bee15a56d8d19b9b06d00d3b9efa6ac116" protocol=ttrpc version=3 Jan 21 23:38:58.663227 systemd[1]: Started cri-containerd-5ee5117538f361c45360a9102d22329fdeb98068b07228e15aecab1a4dd024bd.scope - libcontainer container 5ee5117538f361c45360a9102d22329fdeb98068b07228e15aecab1a4dd024bd. Jan 21 23:38:58.674228 systemd[1]: Started cri-containerd-c31f2d42cac3d7f41c265e4250ea594fd42709dda3b70eb80076689723782686.scope - libcontainer container c31f2d42cac3d7f41c265e4250ea594fd42709dda3b70eb80076689723782686. 
Jan 21 23:38:58.679000 audit: BPF prog-id=127 op=LOAD Jan 21 23:38:58.679000 audit: BPF prog-id=128 op=LOAD Jan 21 23:38:58.679000 audit[3352]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3252 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.679000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565653531313735333866333631633435333630613931303264323233 Jan 21 23:38:58.679000 audit: BPF prog-id=128 op=UNLOAD Jan 21 23:38:58.679000 audit[3352]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3252 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.679000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565653531313735333866333631633435333630613931303264323233 Jan 21 23:38:58.679000 audit: BPF prog-id=129 op=LOAD Jan 21 23:38:58.679000 audit[3352]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3252 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.679000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565653531313735333866333631633435333630613931303264323233 Jan 21 23:38:58.679000 audit: BPF prog-id=130 op=LOAD Jan 21 23:38:58.679000 audit[3352]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3252 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.679000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565653531313735333866333631633435333630613931303264323233 Jan 21 23:38:58.679000 audit: BPF prog-id=130 op=UNLOAD Jan 21 23:38:58.679000 audit[3352]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3252 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.679000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565653531313735333866333631633435333630613931303264323233 Jan 21 23:38:58.679000 audit: BPF prog-id=129 op=UNLOAD Jan 21 23:38:58.679000 audit[3352]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3252 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.679000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565653531313735333866333631633435333630613931303264323233 Jan 21 23:38:58.679000 audit: BPF prog-id=131 op=LOAD Jan 21 23:38:58.679000 audit[3352]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3252 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.679000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565653531313735333866333631633435333630613931303264323233 Jan 21 23:38:58.692000 audit: BPF prog-id=132 op=LOAD Jan 21 23:38:58.692000 audit: BPF prog-id=133 op=LOAD Jan 21 23:38:58.692000 audit[3363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3240 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333316632643432636163336437663431633236356534323530656135 Jan 21 23:38:58.692000 audit: BPF prog-id=133 op=UNLOAD Jan 21 23:38:58.692000 audit[3363]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3240 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333316632643432636163336437663431633236356534323530656135 Jan 21 23:38:58.692000 audit: BPF prog-id=134 op=LOAD Jan 21 23:38:58.692000 audit[3363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3240 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333316632643432636163336437663431633236356534323530656135 Jan 21 23:38:58.692000 audit: BPF prog-id=135 op=LOAD Jan 21 23:38:58.692000 audit[3363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 
ppid=3240 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333316632643432636163336437663431633236356534323530656135 Jan 21 23:38:58.692000 audit: BPF prog-id=135 op=UNLOAD Jan 21 23:38:58.692000 audit[3363]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3240 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333316632643432636163336437663431633236356534323530656135 Jan 21 23:38:58.692000 audit: BPF prog-id=134 op=UNLOAD Jan 21 23:38:58.692000 audit[3363]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3240 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333316632643432636163336437663431633236356534323530656135 Jan 21 23:38:58.692000 audit: BPF prog-id=136 op=LOAD Jan 21 23:38:58.692000 audit[3363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3240 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:58.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333316632643432636163336437663431633236356534323530656135 Jan 21 23:38:58.706757 kubelet[3146]: I0121 23:38:58.706653 3146 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-n-a0ba06055b" Jan 21 23:38:58.727800 containerd[2113]: time="2026-01-21T23:38:58.727765956Z" level=info msg="StartContainer for \"5ee5117538f361c45360a9102d22329fdeb98068b07228e15aecab1a4dd024bd\" returns successfully" Jan 21 23:38:58.737272 containerd[2113]: time="2026-01-21T23:38:58.737237894Z" level=info msg="StartContainer for \"c31f2d42cac3d7f41c265e4250ea594fd42709dda3b70eb80076689723782686\" returns successfully" Jan 21 23:38:58.989982 kubelet[3146]: E0121 23:38:58.989956 3146 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-a0ba06055b\" not found" node="ci-4515.1.0-n-a0ba06055b" Jan 21 23:38:58.991647 kubelet[3146]: E0121 23:38:58.991621 3146 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-a0ba06055b\" not 
found" node="ci-4515.1.0-n-a0ba06055b" Jan 21 23:38:58.995207 kubelet[3146]: E0121 23:38:58.995189 3146 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-a0ba06055b\" not found" node="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:00.015649 kubelet[3146]: E0121 23:39:00.015432 3146 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-a0ba06055b\" not found" node="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:00.015649 kubelet[3146]: E0121 23:39:00.015539 3146 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-a0ba06055b\" not found" node="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:00.066424 kubelet[3146]: E0121 23:39:00.066372 3146 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4515.1.0-n-a0ba06055b\" not found" node="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:00.110942 kubelet[3146]: I0121 23:39:00.110901 3146 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:00.110942 kubelet[3146]: E0121 23:39:00.110941 3146 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515.1.0-n-a0ba06055b\": node \"ci-4515.1.0-n-a0ba06055b\" not found" Jan 21 23:39:00.157323 kubelet[3146]: I0121 23:39:00.157286 3146 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:00.219014 kubelet[3146]: E0121 23:39:00.218405 3146 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515.1.0-n-a0ba06055b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:00.219257 kubelet[3146]: I0121 23:39:00.219077 3146 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:00.221478 kubelet[3146]: E0121 23:39:00.221389 3146 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-n-a0ba06055b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:00.221478 kubelet[3146]: I0121 23:39:00.221413 3146 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:00.224678 kubelet[3146]: E0121 23:39:00.224622 3146 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515.1.0-n-a0ba06055b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:00.846443 kubelet[3146]: I0121 23:39:00.846227 3146 apiserver.go:52] "Watching apiserver" Jan 21 23:39:00.856597 kubelet[3146]: I0121 23:39:00.856573 3146 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 21 23:39:01.006770 kubelet[3146]: I0121 23:39:01.006556 3146 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:01.007007 kubelet[3146]: I0121 23:39:01.006995 3146 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:01.015446 kubelet[3146]: I0121 23:39:01.015413 3146 warnings.go:110] 
"Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 21 23:39:01.016243 kubelet[3146]: I0121 23:39:01.016227 3146 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 21 23:39:02.145491 systemd[1]: Reload requested from client PID 3431 ('systemctl') (unit session-9.scope)... Jan 21 23:39:02.145510 systemd[1]: Reloading... Jan 21 23:39:02.206077 zram_generator::config[3478]: No configuration found. Jan 21 23:39:02.391103 systemd[1]: Reloading finished in 245 ms. Jan 21 23:39:02.412950 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 23:39:02.429499 systemd[1]: kubelet.service: Deactivated successfully. Jan 21 23:39:02.429856 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 23:39:02.430010 systemd[1]: kubelet.service: Consumed 897ms CPU time, 126M memory peak. Jan 21 23:39:02.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:39:02.432919 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 23:39:02.433000 audit: BPF prog-id=137 op=LOAD Jan 21 23:39:02.433000 audit: BPF prog-id=101 op=UNLOAD Jan 21 23:39:02.433000 audit: BPF prog-id=138 op=LOAD Jan 21 23:39:02.433000 audit: BPF prog-id=139 op=LOAD Jan 21 23:39:02.433000 audit: BPF prog-id=102 op=UNLOAD Jan 21 23:39:02.433000 audit: BPF prog-id=103 op=UNLOAD Jan 21 23:39:02.434000 audit: BPF prog-id=140 op=LOAD Jan 21 23:39:02.441000 audit: BPF prog-id=88 op=UNLOAD Jan 21 23:39:02.441000 audit: BPF prog-id=141 op=LOAD Jan 21 23:39:02.441000 audit: BPF prog-id=142 op=LOAD Jan 21 23:39:02.441000 audit: BPF prog-id=89 op=UNLOAD Jan 21 23:39:02.441000 audit: BPF prog-id=90 op=UNLOAD Jan 21 23:39:02.441000 audit: BPF prog-id=143 op=LOAD Jan 21 23:39:02.441000 audit: BPF prog-id=104 op=UNLOAD Jan 21 23:39:02.441000 audit: BPF prog-id=144 op=LOAD Jan 21 23:39:02.441000 audit: BPF prog-id=145 op=LOAD Jan 21 23:39:02.441000 audit: BPF prog-id=105 op=UNLOAD Jan 21 23:39:02.441000 audit: BPF prog-id=106 op=UNLOAD Jan 21 23:39:02.442000 audit: BPF prog-id=146 op=LOAD Jan 21 23:39:02.442000 audit: BPF prog-id=147 op=LOAD Jan 21 23:39:02.442000 audit: BPF prog-id=91 op=UNLOAD Jan 21 23:39:02.442000 audit: BPF prog-id=92 op=UNLOAD Jan 21 23:39:02.442000 audit: BPF prog-id=148 op=LOAD Jan 21 23:39:02.442000 audit: BPF prog-id=93 op=UNLOAD Jan 21 23:39:02.443000 audit: BPF prog-id=149 op=LOAD Jan 21 23:39:02.443000 audit: BPF prog-id=98 op=UNLOAD Jan 21 23:39:02.443000 audit: BPF prog-id=150 op=LOAD Jan 21 23:39:02.443000 audit: BPF prog-id=151 op=LOAD Jan 21 23:39:02.443000 audit: BPF prog-id=99 op=UNLOAD Jan 21 23:39:02.443000 audit: BPF prog-id=100 op=UNLOAD Jan 21 23:39:02.443000 audit: BPF prog-id=152 op=LOAD Jan 21 23:39:02.443000 audit: BPF prog-id=87 op=UNLOAD Jan 21 23:39:02.444000 audit: BPF prog-id=153 op=LOAD Jan 21 23:39:02.444000 audit: BPF prog-id=94 op=UNLOAD Jan 21 23:39:02.444000 audit: BPF prog-id=154 op=LOAD Jan 21 23:39:02.444000 audit: BPF prog-id=155 op=LOAD Jan 21 23:39:02.444000 audit: BPF prog-id=95 op=UNLOAD Jan 21 23:39:02.444000 audit: BPF prog-id=96 op=UNLOAD Jan 21 23:39:02.444000 audit: BPF prog-id=156 op=LOAD Jan 21 23:39:02.444000 
audit: BPF prog-id=97 op=UNLOAD Jan 21 23:39:02.539290 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 23:39:02.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:39:02.547367 (kubelet)[3545]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 21 23:39:02.624392 kubelet[3545]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 23:39:02.624392 kubelet[3545]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 21 23:39:02.624392 kubelet[3545]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 23:39:02.624745 kubelet[3545]: I0121 23:39:02.624569 3545 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 21 23:39:02.630689 kubelet[3545]: I0121 23:39:02.630657 3545 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 21 23:39:02.630689 kubelet[3545]: I0121 23:39:02.630685 3545 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 21 23:39:02.630889 kubelet[3545]: I0121 23:39:02.630871 3545 server.go:956] "Client rotation is on, will bootstrap in background" Jan 21 23:39:02.631787 kubelet[3545]: I0121 23:39:02.631770 3545 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 21 23:39:02.633609 kubelet[3545]: I0121 23:39:02.633376 3545 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 21 23:39:02.640884 kubelet[3545]: I0121 23:39:02.640870 3545 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 21 23:39:02.645202 kubelet[3545]: I0121 23:39:02.645182 3545 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 21 23:39:02.645358 kubelet[3545]: I0121 23:39:02.645334 3545 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 21 23:39:02.645468 kubelet[3545]: I0121 23:39:02.645353 3545 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515.1.0-n-a0ba06055b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 21 23:39:02.645468 kubelet[3545]: I0121 23:39:02.645466 3545 topology_manager.go:138] "Creating topology manager with none policy" Jan 21 23:39:02.645581 kubelet[3545]: I0121 23:39:02.645473 3545 container_manager_linux.go:303] "Creating device plugin manager" Jan 21 23:39:02.645581 kubelet[3545]: I0121 23:39:02.645507 3545 state_mem.go:36] "Initialized new in-memory state store" Jan 21 23:39:02.645711 kubelet[3545]: I0121 23:39:02.645615 3545 kubelet.go:480] "Attempting to sync node with API server" Jan 21 23:39:02.645711 kubelet[3545]: I0121 23:39:02.645624 3545 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 21 23:39:02.645711 kubelet[3545]: I0121 23:39:02.645642 3545 kubelet.go:386] "Adding apiserver pod source" Jan 21 23:39:02.645711 kubelet[3545]: I0121 23:39:02.645654 3545 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 21 23:39:02.647535 kubelet[3545]: I0121 23:39:02.646283 3545 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 21 23:39:02.647535 kubelet[3545]: I0121 23:39:02.646620 3545 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 21 23:39:02.650646 kubelet[3545]: I0121 23:39:02.650599 3545 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 21 23:39:02.650747 kubelet[3545]: I0121 23:39:02.650739 3545 server.go:1289] "Started kubelet" Jan 21 23:39:02.654069 kubelet[3545]: I0121 23:39:02.652525 3545 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 21 
23:39:02.654069 kubelet[3545]: I0121 23:39:02.652862 3545 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 21 23:39:02.654069 kubelet[3545]: I0121 23:39:02.652569 3545 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 21 23:39:02.654165 kubelet[3545]: I0121 23:39:02.654086 3545 server.go:317] "Adding debug handlers to kubelet server" Jan 21 23:39:02.654197 kubelet[3545]: I0121 23:39:02.652041 3545 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 21 23:39:02.655580 kubelet[3545]: I0121 23:39:02.655555 3545 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 21 23:39:02.656362 kubelet[3545]: I0121 23:39:02.656343 3545 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 21 23:39:02.659365 kubelet[3545]: E0121 23:39:02.657381 3545 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-n-a0ba06055b\" not found" Jan 21 23:39:02.659458 kubelet[3545]: I0121 23:39:02.657883 3545 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 21 23:39:02.659596 kubelet[3545]: I0121 23:39:02.659586 3545 reconciler.go:26] "Reconciler: start to sync state" Jan 21 23:39:02.661384 kubelet[3545]: I0121 23:39:02.661344 3545 factory.go:223] Registration of the systemd container factory successfully Jan 21 23:39:02.661475 kubelet[3545]: I0121 23:39:02.661456 3545 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 21 23:39:02.680995 kubelet[3545]: I0121 23:39:02.680888 3545 factory.go:223] Registration of the containerd container factory successfully Jan 21 23:39:02.686324 kubelet[3545]: I0121 23:39:02.686270 3545 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 21 23:39:02.688539 kubelet[3545]: I0121 23:39:02.688519 3545 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 21 23:39:02.688718 kubelet[3545]: I0121 23:39:02.688700 3545 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 21 23:39:02.688793 kubelet[3545]: I0121 23:39:02.688785 3545 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 21 23:39:02.688841 kubelet[3545]: I0121 23:39:02.688833 3545 kubelet.go:2436] "Starting kubelet main sync loop" Jan 21 23:39:02.688922 kubelet[3545]: E0121 23:39:02.688909 3545 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 21 23:39:02.689193 kubelet[3545]: E0121 23:39:02.689177 3545 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 21 23:39:02.721644 kubelet[3545]: I0121 23:39:02.721599 3545 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 21 23:39:02.721644 kubelet[3545]: I0121 23:39:02.721635 3545 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 21 23:39:02.721644 kubelet[3545]: I0121 23:39:02.721656 3545 state_mem.go:36] "Initialized new in-memory state store" Jan 21 23:39:02.721830 kubelet[3545]: I0121 23:39:02.721796 3545 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 21 23:39:02.721873 kubelet[3545]: I0121 23:39:02.721838 3545 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 21 23:39:02.721942 kubelet[3545]: I0121 23:39:02.721873 3545 policy_none.go:49] "None policy: Start" Jan 21 23:39:02.721942 kubelet[3545]: I0121 23:39:02.721883 3545 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 21 23:39:02.721942 kubelet[3545]: I0121 23:39:02.721892 3545 state_mem.go:35] "Initializing new in-memory state store" Jan 21 23:39:02.721994 kubelet[3545]: I0121 23:39:02.721962 3545 state_mem.go:75] "Updated machine memory state" Jan 21 23:39:02.725300 kubelet[3545]: E0121 23:39:02.725273 3545 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 21 23:39:02.725590 kubelet[3545]: I0121 23:39:02.725418 3545 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 21 23:39:02.725590 kubelet[3545]: I0121 23:39:02.725431 3545 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 21 23:39:02.725735 kubelet[3545]: I0121 23:39:02.725716 3545 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 21 23:39:02.727212 kubelet[3545]: E0121 23:39:02.726872 3545 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 21 23:39:02.789751 kubelet[3545]: I0121 23:39:02.789699 3545 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:02.791000 kubelet[3545]: I0121 23:39:02.790104 3545 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:02.791000 kubelet[3545]: I0121 23:39:02.790189 3545 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:02.800298 kubelet[3545]: I0121 23:39:02.800278 3545 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 21 23:39:02.800612 kubelet[3545]: I0121 23:39:02.800431 3545 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 21 23:39:02.800612 kubelet[3545]: E0121 23:39:02.800566 3545 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515.1.0-n-a0ba06055b\" already exists" pod="kube-system/kube-scheduler-ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:02.801207 kubelet[3545]: I0121 23:39:02.801181 3545 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 21 23:39:02.801338 kubelet[3545]: E0121 23:39:02.801299 3545 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-n-a0ba06055b\" already exists" pod="kube-system/kube-apiserver-ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:02.831067 kubelet[3545]: I0121 23:39:02.830613 3545 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:02.840874 kubelet[3545]: I0121 23:39:02.840739 3545 kubelet_node_status.go:124] "Node was previously registered" node="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:02.841326 kubelet[3545]: I0121 23:39:02.841011 3545 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:02.860978 kubelet[3545]: I0121 23:39:02.860942 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/28780b5af20e6ebc4455e0047f95eeab-k8s-certs\") pod \"kube-apiserver-ci-4515.1.0-n-a0ba06055b\" (UID: \"28780b5af20e6ebc4455e0047f95eeab\") " pod="kube-system/kube-apiserver-ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:02.860978 kubelet[3545]: I0121 23:39:02.860975 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2dc336c01ecde0ca770093d7f9e06fd1-ca-certs\") pod \"kube-controller-manager-ci-4515.1.0-n-a0ba06055b\" (UID: \"2dc336c01ecde0ca770093d7f9e06fd1\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:02.861124 kubelet[3545]: I0121 23:39:02.860987 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2dc336c01ecde0ca770093d7f9e06fd1-flexvolume-dir\") pod \"kube-controller-manager-ci-4515.1.0-n-a0ba06055b\" (UID: \"2dc336c01ecde0ca770093d7f9e06fd1\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:02.861124 
kubelet[3545]: I0121 23:39:02.861000 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2dc336c01ecde0ca770093d7f9e06fd1-kubeconfig\") pod \"kube-controller-manager-ci-4515.1.0-n-a0ba06055b\" (UID: \"2dc336c01ecde0ca770093d7f9e06fd1\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:02.861124 kubelet[3545]: I0121 23:39:02.861010 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/28780b5af20e6ebc4455e0047f95eeab-ca-certs\") pod \"kube-apiserver-ci-4515.1.0-n-a0ba06055b\" (UID: \"28780b5af20e6ebc4455e0047f95eeab\") " pod="kube-system/kube-apiserver-ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:02.861124 kubelet[3545]: I0121 23:39:02.861022 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/28780b5af20e6ebc4455e0047f95eeab-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515.1.0-n-a0ba06055b\" (UID: \"28780b5af20e6ebc4455e0047f95eeab\") " pod="kube-system/kube-apiserver-ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:02.861124 kubelet[3545]: I0121 23:39:02.861033 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2dc336c01ecde0ca770093d7f9e06fd1-k8s-certs\") pod \"kube-controller-manager-ci-4515.1.0-n-a0ba06055b\" (UID: \"2dc336c01ecde0ca770093d7f9e06fd1\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:02.861213 kubelet[3545]: I0121 23:39:02.861050 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2dc336c01ecde0ca770093d7f9e06fd1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515.1.0-n-a0ba06055b\" (UID: \"2dc336c01ecde0ca770093d7f9e06fd1\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:02.861213 kubelet[3545]: I0121 23:39:02.861061 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7a375bdb5c5225ec7a7f95eef18be5b4-kubeconfig\") pod \"kube-scheduler-ci-4515.1.0-n-a0ba06055b\" (UID: \"7a375bdb5c5225ec7a7f95eef18be5b4\") " pod="kube-system/kube-scheduler-ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:03.649085 kubelet[3545]: I0121 23:39:03.648984 3545 apiserver.go:52] "Watching apiserver" Jan 21 23:39:03.659759 kubelet[3545]: I0121 23:39:03.659585 3545 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 21 23:39:03.714567 kubelet[3545]: I0121 23:39:03.714534 3545 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:03.724986 kubelet[3545]: I0121 23:39:03.724956 3545 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 21 23:39:03.725125 kubelet[3545]: E0121 23:39:03.725006 3545 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-n-a0ba06055b\" already exists" pod="kube-system/kube-apiserver-ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:03.729709 kubelet[3545]: I0121 23:39:03.729271 3545 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4515.1.0-n-a0ba06055b" podStartSLOduration=2.7292456080000003 podStartE2EDuration="2.729245608s" podCreationTimestamp="2026-01-21 23:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 23:39:03.728883344 +0000 UTC m=+1.178743226" watchObservedRunningTime="2026-01-21 23:39:03.729245608 +0000 UTC m=+1.179105490" Jan 21 23:39:03.737954 kubelet[3545]: I0121 23:39:03.737884 3545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4515.1.0-n-a0ba06055b" podStartSLOduration=1.737871937 podStartE2EDuration="1.737871937s" podCreationTimestamp="2026-01-21 23:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 23:39:03.737668953 +0000 UTC m=+1.187528835" watchObservedRunningTime="2026-01-21 23:39:03.737871937 +0000 UTC m=+1.187731851" Jan 21 23:39:03.755629 kubelet[3545]: I0121 23:39:03.755558 3545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4515.1.0-n-a0ba06055b" podStartSLOduration=2.755544254 podStartE2EDuration="2.755544254s" podCreationTimestamp="2026-01-21 23:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 23:39:03.746588583 +0000 UTC m=+1.196448465" watchObservedRunningTime="2026-01-21 23:39:03.755544254 +0000 UTC m=+1.205404136" Jan 21 23:39:04.823144 update_engine[2086]: I20260121 23:39:04.823076 2086 update_attempter.cc:509] Updating boot flags... Jan 21 23:39:05.197878 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jan 21 23:39:09.124568 kubelet[3545]: I0121 23:39:09.124528 3545 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 21 23:39:09.125543 kubelet[3545]: I0121 23:39:09.125459 3545 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 21 23:39:09.125597 containerd[2113]: time="2026-01-21T23:39:09.125213659Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 21 23:39:10.128512 systemd[1]: Created slice kubepods-besteffort-pod1ed70e62_b1bf_4e49_a4b1_1f5a5832fae8.slice - libcontainer container kubepods-besteffort-pod1ed70e62_b1bf_4e49_a4b1_1f5a5832fae8.slice. 
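The pod_startup_latency_tracker entries above compute podStartSLOduration as watchObservedRunningTime minus podCreationTimestamp (firstStartedPulling/lastFinishedPulling are zero values for these static pods, so pull time does not contribute). A minimal stdlib Python sketch of that arithmetic, reusing the kube-apiserver timestamps from the log; trimming nanoseconds to microseconds and dropping the trailing "UTC" token are concessions to datetime.strptime, not part of the kubelet's logic.

from datetime import datetime

# Timestamps copied from the pod_startup_latency_tracker entry for
# kube-apiserver-ci-4515.1.0-n-a0ba06055b above (trailing "UTC" dropped).
created = "2026-01-21 23:39:01 +0000"
observed_running = "2026-01-21 23:39:03.729245608 +0000"

def parse(ts: str) -> datetime:
    # Go prints nanoseconds; %f only accepts microseconds, so trim to 6 digits.
    date, clock, tz = ts.split()
    if "." in clock:
        hms, frac = clock.split(".")
        clock = hms + "." + frac[:6]
        fmt = "%Y-%m-%d %H:%M:%S.%f %z"
    else:
        fmt = "%Y-%m-%d %H:%M:%S %z"
    return datetime.strptime(" ".join((date, clock, tz)), fmt)

print((parse(observed_running) - parse(created)).total_seconds())
# ~2.729245, matching podStartSLOduration=2.729245608s up to the trimmed digits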
Jan 21 23:39:10.209604 kubelet[3545]: I0121 23:39:10.209556 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1ed70e62-b1bf-4e49-a4b1-1f5a5832fae8-kube-proxy\") pod \"kube-proxy-ft9vm\" (UID: \"1ed70e62-b1bf-4e49-a4b1-1f5a5832fae8\") " pod="kube-system/kube-proxy-ft9vm" Jan 21 23:39:10.209604 kubelet[3545]: I0121 23:39:10.209606 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ed70e62-b1bf-4e49-a4b1-1f5a5832fae8-lib-modules\") pod \"kube-proxy-ft9vm\" (UID: \"1ed70e62-b1bf-4e49-a4b1-1f5a5832fae8\") " pod="kube-system/kube-proxy-ft9vm" Jan 21 23:39:10.209604 kubelet[3545]: I0121 23:39:10.209620 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp2lg\" (UniqueName: \"kubernetes.io/projected/1ed70e62-b1bf-4e49-a4b1-1f5a5832fae8-kube-api-access-cp2lg\") pod \"kube-proxy-ft9vm\" (UID: \"1ed70e62-b1bf-4e49-a4b1-1f5a5832fae8\") " pod="kube-system/kube-proxy-ft9vm" Jan 21 23:39:10.210034 kubelet[3545]: I0121 23:39:10.209634 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ed70e62-b1bf-4e49-a4b1-1f5a5832fae8-xtables-lock\") pod \"kube-proxy-ft9vm\" (UID: \"1ed70e62-b1bf-4e49-a4b1-1f5a5832fae8\") " pod="kube-system/kube-proxy-ft9vm" Jan 21 23:39:10.312124 systemd[1]: Created slice kubepods-besteffort-pod0cad5f3a_dc75_4496_9142_7ebbce957355.slice - libcontainer container kubepods-besteffort-pod0cad5f3a_dc75_4496_9142_7ebbce957355.slice. Jan 21 23:39:10.411125 kubelet[3545]: I0121 23:39:10.410979 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0cad5f3a-dc75-4496-9142-7ebbce957355-var-lib-calico\") pod \"tigera-operator-7dcd859c48-4m7qr\" (UID: \"0cad5f3a-dc75-4496-9142-7ebbce957355\") " pod="tigera-operator/tigera-operator-7dcd859c48-4m7qr" Jan 21 23:39:10.411125 kubelet[3545]: I0121 23:39:10.411022 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzk9c\" (UniqueName: \"kubernetes.io/projected/0cad5f3a-dc75-4496-9142-7ebbce957355-kube-api-access-rzk9c\") pod \"tigera-operator-7dcd859c48-4m7qr\" (UID: \"0cad5f3a-dc75-4496-9142-7ebbce957355\") " pod="tigera-operator/tigera-operator-7dcd859c48-4m7qr" Jan 21 23:39:10.436027 containerd[2113]: time="2026-01-21T23:39:10.435825021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ft9vm,Uid:1ed70e62-b1bf-4e49-a4b1-1f5a5832fae8,Namespace:kube-system,Attempt:0,}" Jan 21 23:39:10.473752 containerd[2113]: time="2026-01-21T23:39:10.473701884Z" level=info msg="connecting to shim e62a4b62409d2f9c748f8dc9af3cd512b3b96336e0ae5d178c725793caf02fee" address="unix:///run/containerd/s/0ec3e28857882301731b3474a1a88707d9d7e19c186728db53c4c5e7c6746c67" namespace=k8s.io protocol=ttrpc version=3 Jan 21 23:39:10.497305 systemd[1]: Started cri-containerd-e62a4b62409d2f9c748f8dc9af3cd512b3b96336e0ae5d178c725793caf02fee.scope - libcontainer container e62a4b62409d2f9c748f8dc9af3cd512b3b96336e0ae5d178c725793caf02fee. 
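The systemd "Created slice kubepods-besteffort-pod1ed70e62_b1bf_4e49_a4b1_1f5a5832fae8.slice" lines above reflect the systemd cgroup driver's naming visible in the log: a QoS segment, then "pod" plus the pod UID with dashes swapped for underscores ("-" is the slice hierarchy separator in systemd, so a raw UID would change the nesting). A small sketch of that mapping; the helper name and the qos_class default are mine for illustration, not a kubelet API.

def pod_slice_name(pod_uid: str, qos_class: str = "besteffort") -> str:
    # Dashes inside the UID are escaped to underscores so the unit stays a
    # direct child of kubepods-<qos>.slice instead of nesting further.
    return "kubepods-{qos}-pod{uid}.slice".format(
        qos=qos_class, uid=pod_uid.replace("-", "_"))

# kube-proxy-ft9vm's UID, taken from the volume reconciler entries above:
print(pod_slice_name("1ed70e62-b1bf-4e49-a4b1-1f5a5832fae8"))
# -> kubepods-besteffort-pod1ed70e62_b1bf_4e49_a4b1_1f5a5832fae8.slice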
Jan 21 23:39:10.504000 audit: BPF prog-id=157 op=LOAD Jan 21 23:39:10.508896 kernel: kauditd_printk_skb: 200 callbacks suppressed Jan 21 23:39:10.509324 kernel: audit: type=1334 audit(1769038750.504:452): prog-id=157 op=LOAD Jan 21 23:39:10.505000 audit: BPF prog-id=158 op=LOAD Jan 21 23:39:10.517729 kernel: audit: type=1334 audit(1769038750.505:453): prog-id=158 op=LOAD Jan 21 23:39:10.505000 audit[3675]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=3663 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.535006 kernel: audit: type=1300 audit(1769038750.505:453): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=3663 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.505000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536326134623632343039643266396337343866386463396166336364 Jan 21 23:39:10.553556 kernel: audit: type=1327 audit(1769038750.505:453): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536326134623632343039643266396337343866386463396166336364 Jan 21 23:39:10.508000 audit: BPF prog-id=158 op=UNLOAD Jan 21 23:39:10.560068 kernel: audit: type=1334 audit(1769038750.508:454): prog-id=158 op=UNLOAD Jan 21 23:39:10.508000 audit[3675]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3663 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.575613 kernel: audit: type=1300 audit(1769038750.508:454): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3663 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.508000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536326134623632343039643266396337343866386463396166336364 Jan 21 23:39:10.591953 kernel: audit: type=1327 audit(1769038750.508:454): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536326134623632343039643266396337343866386463396166336364 Jan 21 23:39:10.508000 audit: BPF prog-id=159 op=LOAD Jan 21 23:39:10.596984 kernel: audit: type=1334 audit(1769038750.508:455): prog-id=159 op=LOAD Jan 21 23:39:10.508000 audit[3675]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=3663 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.614066 kernel: audit: type=1300 audit(1769038750.508:455): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=3663 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.508000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536326134623632343039643266396337343866386463396166336364 Jan 21 23:39:10.616801 containerd[2113]: time="2026-01-21T23:39:10.616603061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-4m7qr,Uid:0cad5f3a-dc75-4496-9142-7ebbce957355,Namespace:tigera-operator,Attempt:0,}" Jan 21 23:39:10.631847 kernel: audit: type=1327 audit(1769038750.508:455): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536326134623632343039643266396337343866386463396166336364 Jan 21 23:39:10.512000 audit: BPF prog-id=160 op=LOAD Jan 21 23:39:10.512000 audit[3675]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=3663 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.512000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536326134623632343039643266396337343866386463396166336364 Jan 21 23:39:10.512000 audit: BPF prog-id=160 op=UNLOAD Jan 21 23:39:10.512000 audit[3675]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3663 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.512000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536326134623632343039643266396337343866386463396166336364 Jan 21 23:39:10.512000 audit: BPF prog-id=159 op=UNLOAD Jan 21 23:39:10.512000 audit[3675]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3663 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.512000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536326134623632343039643266396337343866386463396166336364 Jan 21 23:39:10.512000 audit: BPF prog-id=161 op=LOAD Jan 21 23:39:10.512000 audit[3675]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 
ppid=3663 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.512000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536326134623632343039643266396337343866386463396166336364 Jan 21 23:39:10.645131 containerd[2113]: time="2026-01-21T23:39:10.645089837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ft9vm,Uid:1ed70e62-b1bf-4e49-a4b1-1f5a5832fae8,Namespace:kube-system,Attempt:0,} returns sandbox id \"e62a4b62409d2f9c748f8dc9af3cd512b3b96336e0ae5d178c725793caf02fee\"" Jan 21 23:39:10.657275 containerd[2113]: time="2026-01-21T23:39:10.657104190Z" level=info msg="CreateContainer within sandbox \"e62a4b62409d2f9c748f8dc9af3cd512b3b96336e0ae5d178c725793caf02fee\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 21 23:39:10.659482 containerd[2113]: time="2026-01-21T23:39:10.659455734Z" level=info msg="connecting to shim c0bf718dcbd2511f6bf2e231b431ba776406e8ace789c3d628b0cf0101ae029b" address="unix:///run/containerd/s/9c4758f1cc81e33d84187840739afb24a15be3909f4064d3aaecde5c199506e9" namespace=k8s.io protocol=ttrpc version=3 Jan 21 23:39:10.676333 containerd[2113]: time="2026-01-21T23:39:10.676245457Z" level=info msg="Container 6dc8f9e6667feac9e85ace4c03f60ba11fab00923b3d52093d8ce1c439b760a7: CDI devices from CRI Config.CDIDevices: []" Jan 21 23:39:10.681329 systemd[1]: Started cri-containerd-c0bf718dcbd2511f6bf2e231b431ba776406e8ace789c3d628b0cf0101ae029b.scope - libcontainer container c0bf718dcbd2511f6bf2e231b431ba776406e8ace789c3d628b0cf0101ae029b. 
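The audit PROCTITLE fields above are hex-encoded command lines with NUL-separated arguments. Decoding the runc proctitle recorded for the e62a4b62... sandbox makes the shim invocation readable; a stdlib sketch, with the hex value copied from the records above (the final path element appears cut short, consistent with the kernel's proctitle capture limit).

# PROCTITLE value from the runc audit records above, split only for line length.
proctitle = (
    "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F00"
    "2D2D6C6F67002F72756E2F636F6E7461696E6572642F"
    "696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F"
    "6536326134623632343039643266396337343866386463396166336364"
)

argv = [a.decode() for a in bytes.fromhex(proctitle).split(b"\x00")]
print(argv)
# ['runc', '--root', '/run/containerd/runc/k8s.io', '--log',
#  '/run/containerd/io.containerd.runtime.v2.task/k8s.io/e62a4b62409d2f9c748f8dc9af3cd']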
Jan 21 23:39:10.694825 containerd[2113]: time="2026-01-21T23:39:10.694775165Z" level=info msg="CreateContainer within sandbox \"e62a4b62409d2f9c748f8dc9af3cd512b3b96336e0ae5d178c725793caf02fee\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6dc8f9e6667feac9e85ace4c03f60ba11fab00923b3d52093d8ce1c439b760a7\"" Jan 21 23:39:10.697000 audit: BPF prog-id=162 op=LOAD Jan 21 23:39:10.699986 containerd[2113]: time="2026-01-21T23:39:10.699866979Z" level=info msg="StartContainer for \"6dc8f9e6667feac9e85ace4c03f60ba11fab00923b3d52093d8ce1c439b760a7\"" Jan 21 23:39:10.699000 audit: BPF prog-id=163 op=LOAD Jan 21 23:39:10.699000 audit[3722]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=3711 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.699000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330626637313864636264323531316636626632653233316234333162 Jan 21 23:39:10.699000 audit: BPF prog-id=163 op=UNLOAD Jan 21 23:39:10.699000 audit[3722]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3711 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.699000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330626637313864636264323531316636626632653233316234333162 Jan 21 23:39:10.700000 audit: BPF prog-id=164 op=LOAD Jan 21 23:39:10.700000 audit[3722]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=3711 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330626637313864636264323531316636626632653233316234333162 Jan 21 23:39:10.700000 audit: BPF prog-id=165 op=LOAD Jan 21 23:39:10.700000 audit[3722]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=3711 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330626637313864636264323531316636626632653233316234333162 Jan 21 23:39:10.701000 audit: BPF prog-id=165 op=UNLOAD Jan 21 23:39:10.701000 audit[3722]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3711 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330626637313864636264323531316636626632653233316234333162 Jan 21 23:39:10.701000 audit: BPF prog-id=164 op=UNLOAD Jan 21 23:39:10.701000 audit[3722]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3711 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330626637313864636264323531316636626632653233316234333162 Jan 21 23:39:10.701000 audit: BPF prog-id=166 op=LOAD Jan 21 23:39:10.701000 audit[3722]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=3711 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330626637313864636264323531316636626632653233316234333162 Jan 21 23:39:10.704217 containerd[2113]: time="2026-01-21T23:39:10.704178492Z" level=info msg="connecting to shim 6dc8f9e6667feac9e85ace4c03f60ba11fab00923b3d52093d8ce1c439b760a7" address="unix:///run/containerd/s/0ec3e28857882301731b3474a1a88707d9d7e19c186728db53c4c5e7c6746c67" protocol=ttrpc version=3 Jan 21 23:39:10.727494 systemd[1]: Started cri-containerd-6dc8f9e6667feac9e85ace4c03f60ba11fab00923b3d52093d8ce1c439b760a7.scope - libcontainer container 6dc8f9e6667feac9e85ace4c03f60ba11fab00923b3d52093d8ce1c439b760a7. 
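Each runc setup above appears in the audit stream as paired "audit: BPF prog-id=N op=LOAD" / "op=UNLOAD" events for the programs attached to the container's cgroup. A rough way to check that a captured journal excerpt has balanced pairs is sketched below; the regex matches the audit lines above, while the file path in the usage comment is only a placeholder.

import re
from collections import Counter

BPF_EVENT = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

def bpf_balance(journal_text: str) -> dict:
    # +1 for every LOAD, -1 for every UNLOAD; non-zero entries are program ids
    # still loaded (or whose UNLOAD fell outside the captured window).
    balance = Counter()
    for prog_id, op in BPF_EVENT.findall(journal_text):
        balance[prog_id] += 1 if op == "LOAD" else -1
    return {pid: n for pid, n in balance.items() if n != 0}

# usage (placeholder path): print(bpf_balance(open("journal-excerpt.txt").read()))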
Jan 21 23:39:10.739093 containerd[2113]: time="2026-01-21T23:39:10.739039706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-4m7qr,Uid:0cad5f3a-dc75-4496-9142-7ebbce957355,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c0bf718dcbd2511f6bf2e231b431ba776406e8ace789c3d628b0cf0101ae029b\"" Jan 21 23:39:10.741290 containerd[2113]: time="2026-01-21T23:39:10.741250141Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 21 23:39:10.788000 audit: BPF prog-id=167 op=LOAD Jan 21 23:39:10.788000 audit[3742]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3663 pid=3742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.788000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664633866396536363637666561633965383561636534633033663630 Jan 21 23:39:10.788000 audit: BPF prog-id=168 op=LOAD Jan 21 23:39:10.788000 audit[3742]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3663 pid=3742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.788000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664633866396536363637666561633965383561636534633033663630 Jan 21 23:39:10.788000 audit: BPF prog-id=168 op=UNLOAD Jan 21 23:39:10.788000 audit[3742]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3663 pid=3742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.788000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664633866396536363637666561633965383561636534633033663630 Jan 21 23:39:10.788000 audit: BPF prog-id=167 op=UNLOAD Jan 21 23:39:10.788000 audit[3742]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3663 pid=3742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.788000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664633866396536363637666561633965383561636534633033663630 Jan 21 23:39:10.788000 audit: BPF prog-id=169 op=LOAD Jan 21 23:39:10.788000 audit[3742]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3663 pid=3742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.788000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664633866396536363637666561633965383561636534633033663630 Jan 21 23:39:10.809314 containerd[2113]: time="2026-01-21T23:39:10.809272362Z" level=info msg="StartContainer for \"6dc8f9e6667feac9e85ace4c03f60ba11fab00923b3d52093d8ce1c439b760a7\" returns successfully" Jan 21 23:39:10.910000 audit[3813]: NETFILTER_CFG table=mangle:57 family=10 entries=1 op=nft_register_chain pid=3813 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:39:10.910000 audit[3813]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc95c43e0 a2=0 a3=1 items=0 ppid=3761 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.910000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 21 23:39:10.910000 audit[3812]: NETFILTER_CFG table=mangle:58 family=2 entries=1 op=nft_register_chain pid=3812 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:39:10.910000 audit[3812]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe97b83d0 a2=0 a3=1 items=0 ppid=3761 pid=3812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.910000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 21 23:39:10.912000 audit[3815]: NETFILTER_CFG table=nat:59 family=10 entries=1 op=nft_register_chain pid=3815 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:39:10.912000 audit[3815]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe759e4d0 a2=0 a3=1 items=0 ppid=3761 pid=3815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.912000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 21 23:39:10.913000 audit[3816]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=3816 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:39:10.913000 audit[3816]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcd4caa00 a2=0 a3=1 items=0 ppid=3761 pid=3816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.913000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 21 23:39:10.914000 audit[3817]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=3817 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:39:10.914000 audit[3817]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff3a80020 a2=0 a3=1 items=0 ppid=3761 pid=3817 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.914000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 21 23:39:10.916000 audit[3819]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_chain pid=3819 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:39:10.916000 audit[3819]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffca7adae0 a2=0 a3=1 items=0 ppid=3761 pid=3819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.916000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 21 23:39:10.922000 audit[3821]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3821 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:39:10.922000 audit[3821]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffce6695f0 a2=0 a3=1 items=0 ppid=3761 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.922000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 21 23:39:10.925000 audit[3823]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3823 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:39:10.925000 audit[3823]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe036aea0 a2=0 a3=1 items=0 ppid=3761 pid=3823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.925000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 21 23:39:10.929000 audit[3826]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_rule pid=3826 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:39:10.929000 audit[3826]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffda8f7340 a2=0 a3=1 items=0 ppid=3761 pid=3826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.929000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 21 23:39:10.930000 audit[3827]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_chain pid=3827 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:39:10.930000 audit[3827]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=100 a0=3 a1=ffffee6c88b0 a2=0 a3=1 items=0 ppid=3761 pid=3827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.930000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 21 23:39:10.933000 audit[3829]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3829 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:39:10.933000 audit[3829]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff6f6dbf0 a2=0 a3=1 items=0 ppid=3761 pid=3829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.933000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 21 23:39:10.934000 audit[3830]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3830 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:39:10.934000 audit[3830]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc69b68d0 a2=0 a3=1 items=0 ppid=3761 pid=3830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.934000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 21 23:39:10.936000 audit[3832]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3832 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:39:10.936000 audit[3832]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffffd9f8d0 a2=0 a3=1 items=0 ppid=3761 pid=3832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.936000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 21 23:39:10.939000 audit[3835]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=3835 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:39:10.939000 audit[3835]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff3cf0670 a2=0 a3=1 items=0 ppid=3761 pid=3835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.939000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 21 23:39:10.940000 
audit[3836]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=3836 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:39:10.940000 audit[3836]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd05ba340 a2=0 a3=1 items=0 ppid=3761 pid=3836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.940000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 21 23:39:10.942000 audit[3838]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3838 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:39:10.942000 audit[3838]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffa659dc0 a2=0 a3=1 items=0 ppid=3761 pid=3838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.942000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 21 23:39:10.943000 audit[3839]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=3839 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:39:10.943000 audit[3839]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffc999980 a2=0 a3=1 items=0 ppid=3761 pid=3839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.943000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 21 23:39:10.946000 audit[3841]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=3841 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:39:10.946000 audit[3841]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffec186fe0 a2=0 a3=1 items=0 ppid=3761 pid=3841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.946000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 21 23:39:10.949000 audit[3844]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=3844 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:39:10.949000 audit[3844]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd66af2d0 a2=0 a3=1 items=0 ppid=3761 pid=3844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.949000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 21 23:39:10.952000 audit[3847]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=3847 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:39:10.952000 audit[3847]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffec8096e0 a2=0 a3=1 items=0 ppid=3761 pid=3847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.952000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 21 23:39:10.953000 audit[3848]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3848 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:39:10.953000 audit[3848]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc1866920 a2=0 a3=1 items=0 ppid=3761 pid=3848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.953000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 21 23:39:10.955000 audit[3850]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3850 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:39:10.955000 audit[3850]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff0ec2d10 a2=0 a3=1 items=0 ppid=3761 pid=3850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.955000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 21 23:39:10.959000 audit[3853]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_rule pid=3853 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:39:10.959000 audit[3853]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd62b7d10 a2=0 a3=1 items=0 ppid=3761 pid=3853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.959000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 21 23:39:10.960000 audit[3854]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_chain pid=3854 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:39:10.960000 audit[3854]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd2535dc0 
a2=0 a3=1 items=0 ppid=3761 pid=3854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.960000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 21 23:39:10.962000 audit[3856]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=3856 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:39:10.962000 audit[3856]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffec5f80c0 a2=0 a3=1 items=0 ppid=3761 pid=3856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.962000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 21 23:39:10.989000 audit[3862]: NETFILTER_CFG table=filter:82 family=2 entries=8 op=nft_register_rule pid=3862 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:10.989000 audit[3862]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffed1faa10 a2=0 a3=1 items=0 ppid=3761 pid=3862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.989000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:10.999000 audit[3862]: NETFILTER_CFG table=nat:83 family=2 entries=14 op=nft_register_chain pid=3862 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:10.999000 audit[3862]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffed1faa10 a2=0 a3=1 items=0 ppid=3761 pid=3862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:10.999000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:11.001000 audit[3867]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3867 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:39:11.001000 audit[3867]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc71ef3f0 a2=0 a3=1 items=0 ppid=3761 pid=3867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:11.001000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 21 23:39:11.003000 audit[3869]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=3869 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:39:11.003000 audit[3869]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffe0e36a20 a2=0 a3=1 items=0 ppid=3761 pid=3869 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:11.003000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 21 23:39:11.006000 audit[3872]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=3872 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:39:11.006000 audit[3872]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd2acf790 a2=0 a3=1 items=0 ppid=3761 pid=3872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:11.006000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 21 23:39:11.008000 audit[3873]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=3873 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:39:11.008000 audit[3873]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd0cec5d0 a2=0 a3=1 items=0 ppid=3761 pid=3873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:11.008000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 21 23:39:11.010000 audit[3875]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=3875 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:39:11.010000 audit[3875]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe9931100 a2=0 a3=1 items=0 ppid=3761 pid=3875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:11.010000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 21 23:39:11.011000 audit[3876]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3876 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:39:11.011000 audit[3876]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe7e5c0a0 a2=0 a3=1 items=0 ppid=3761 pid=3876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:11.011000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 21 23:39:11.014000 audit[3878]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3878 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:39:11.014000 audit[3878]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe5beb530 a2=0 a3=1 items=0 ppid=3761 pid=3878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:11.014000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 21 23:39:11.019000 audit[3881]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=3881 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:39:11.019000 audit[3881]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffce410850 a2=0 a3=1 items=0 ppid=3761 pid=3881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:11.019000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 21 23:39:11.020000 audit[3882]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=3882 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:39:11.020000 audit[3882]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc8e6d760 a2=0 a3=1 items=0 ppid=3761 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:11.020000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 21 23:39:11.022000 audit[3884]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3884 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:39:11.022000 audit[3884]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffb805040 a2=0 a3=1 items=0 ppid=3761 pid=3884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:11.022000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 21 23:39:11.023000 audit[3885]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=3885 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:39:11.023000 audit[3885]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffead46670 a2=0 a3=1 items=0 ppid=3761 pid=3885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:11.023000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 21 23:39:11.025000 audit[3887]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=3887 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:39:11.025000 audit[3887]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd21a26b0 a2=0 a3=1 items=0 ppid=3761 pid=3887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:11.025000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 21 23:39:11.028000 audit[3890]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=3890 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:39:11.028000 audit[3890]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd369e6b0 a2=0 a3=1 items=0 ppid=3761 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:11.028000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 21 23:39:11.031000 audit[3893]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=3893 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:39:11.031000 audit[3893]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe80ee430 a2=0 a3=1 items=0 ppid=3761 pid=3893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:11.031000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 21 23:39:11.032000 audit[3894]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3894 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:39:11.032000 audit[3894]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffee500e50 a2=0 a3=1 items=0 ppid=3761 pid=3894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:11.032000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 21 23:39:11.034000 audit[3896]: NETFILTER_CFG table=nat:99 family=10 entries=1 op=nft_register_rule pid=3896 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:39:11.034000 audit[3896]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffdb689a30 a2=0 a3=1 items=0 ppid=3761 pid=3896 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:11.034000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 21 23:39:11.038000 audit[3899]: NETFILTER_CFG table=nat:100 family=10 entries=1 op=nft_register_rule pid=3899 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:39:11.038000 audit[3899]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffec829610 a2=0 a3=1 items=0 ppid=3761 pid=3899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:11.038000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 21 23:39:11.040000 audit[3900]: NETFILTER_CFG table=nat:101 family=10 entries=1 op=nft_register_chain pid=3900 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:39:11.040000 audit[3900]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff9df5bf0 a2=0 a3=1 items=0 ppid=3761 pid=3900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:11.040000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 21 23:39:11.042000 audit[3902]: NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=3902 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:39:11.042000 audit[3902]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffe0b8b710 a2=0 a3=1 items=0 ppid=3761 pid=3902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:11.042000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 21 23:39:11.043000 audit[3903]: NETFILTER_CFG table=filter:103 family=10 entries=1 op=nft_register_chain pid=3903 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:39:11.043000 audit[3903]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc9270f00 a2=0 a3=1 items=0 ppid=3761 pid=3903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:11.043000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 21 23:39:11.046000 audit[3905]: NETFILTER_CFG table=filter:104 family=10 entries=1 op=nft_register_rule pid=3905 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 
23:39:11.046000 audit[3905]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff796d8b0 a2=0 a3=1 items=0 ppid=3761 pid=3905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:11.046000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 21 23:39:11.049000 audit[3908]: NETFILTER_CFG table=filter:105 family=10 entries=1 op=nft_register_rule pid=3908 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:39:11.049000 audit[3908]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff92895b0 a2=0 a3=1 items=0 ppid=3761 pid=3908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:11.049000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 21 23:39:11.051000 audit[3910]: NETFILTER_CFG table=filter:106 family=10 entries=3 op=nft_register_rule pid=3910 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 21 23:39:11.051000 audit[3910]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffdcec3750 a2=0 a3=1 items=0 ppid=3761 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:11.051000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:11.052000 audit[3910]: NETFILTER_CFG table=nat:107 family=10 entries=7 op=nft_register_chain pid=3910 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 21 23:39:11.052000 audit[3910]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffdcec3750 a2=0 a3=1 items=0 ppid=3761 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:11.052000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:11.750509 kubelet[3545]: I0121 23:39:11.750427 3545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ft9vm" podStartSLOduration=1.7504123169999999 podStartE2EDuration="1.750412317s" podCreationTimestamp="2026-01-21 23:39:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 23:39:11.750353274 +0000 UTC m=+9.200213156" watchObservedRunningTime="2026-01-21 23:39:11.750412317 +0000 UTC m=+9.200272199" Jan 21 23:39:12.354797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount350477530.mount: Deactivated successfully. 
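[Editor's note] The audit PROCTITLE fields in the records above are the auditing process's command line, hex-encoded with NUL-separated arguments. A minimal Python sketch to turn one back into a readable command (the helper name decode_proctitle is illustrative, not part of any tool appearing in this log):

# Decode an audit PROCTITLE hex string into the original command line.
# Arguments are separated by NUL bytes in the raw value.
def decode_proctitle(hex_string: str) -> str:
    raw = bytes.fromhex(hex_string)
    return " ".join(part.decode("utf-8", errors="replace")
                    for part in raw.split(b"\x00") if part)

if __name__ == "__main__":
    # proctitle value taken verbatim from one of the records above
    sample = ("69707461626C6573002D770035002D5700313030303030"
              "002D4E004B5542452D4E4F4445504F525453002D740066696C746572")
    # Prints: iptables -w 5 -W 100000 -N KUBE-NODEPORTS -t filter
    print(decode_proctitle(sample))

The same decoder applies to the ip6tables, iptables-restore and runc PROCTITLE records elsewhere in this log.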
Jan 21 23:39:12.810935 containerd[2113]: time="2026-01-21T23:39:12.810872392Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:39:12.813468 containerd[2113]: time="2026-01-21T23:39:12.813402766Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Jan 21 23:39:12.815742 containerd[2113]: time="2026-01-21T23:39:12.815712804Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:39:12.819535 containerd[2113]: time="2026-01-21T23:39:12.819486281Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:39:12.820616 containerd[2113]: time="2026-01-21T23:39:12.820503351Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.079216497s" Jan 21 23:39:12.820616 containerd[2113]: time="2026-01-21T23:39:12.820532792Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 21 23:39:12.829529 containerd[2113]: time="2026-01-21T23:39:12.829493871Z" level=info msg="CreateContainer within sandbox \"c0bf718dcbd2511f6bf2e231b431ba776406e8ace789c3d628b0cf0101ae029b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 21 23:39:12.844584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3754881552.mount: Deactivated successfully. Jan 21 23:39:12.846066 containerd[2113]: time="2026-01-21T23:39:12.845824793Z" level=info msg="Container 8f25c035ecba75cad0bfacb7b509ce77683d1ea3f28ee94e9e4bfe0eefbf7ad6: CDI devices from CRI Config.CDIDevices: []" Jan 21 23:39:12.878406 containerd[2113]: time="2026-01-21T23:39:12.878353568Z" level=info msg="CreateContainer within sandbox \"c0bf718dcbd2511f6bf2e231b431ba776406e8ace789c3d628b0cf0101ae029b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8f25c035ecba75cad0bfacb7b509ce77683d1ea3f28ee94e9e4bfe0eefbf7ad6\"" Jan 21 23:39:12.879066 containerd[2113]: time="2026-01-21T23:39:12.878984744Z" level=info msg="StartContainer for \"8f25c035ecba75cad0bfacb7b509ce77683d1ea3f28ee94e9e4bfe0eefbf7ad6\"" Jan 21 23:39:12.881859 containerd[2113]: time="2026-01-21T23:39:12.881805425Z" level=info msg="connecting to shim 8f25c035ecba75cad0bfacb7b509ce77683d1ea3f28ee94e9e4bfe0eefbf7ad6" address="unix:///run/containerd/s/9c4758f1cc81e33d84187840739afb24a15be3909f4064d3aaecde5c199506e9" protocol=ttrpc version=3 Jan 21 23:39:12.898246 systemd[1]: Started cri-containerd-8f25c035ecba75cad0bfacb7b509ce77683d1ea3f28ee94e9e4bfe0eefbf7ad6.scope - libcontainer container 8f25c035ecba75cad0bfacb7b509ce77683d1ea3f28ee94e9e4bfe0eefbf7ad6. 
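[Editor's note] The containerd "Pulled image ... in <duration>" record above carries the image reference, a reported size, and the pull duration. A minimal parsing sketch, assuming the message format stays as shown in this journal (the reported size is what containerd prints; the actual wire transfer may be smaller, as the "bytes read" record suggests):

# Extract image name, reported size and pull duration from a containerd
# "Pulled image" message and derive a rough MiB/s figure.
import re

PULLED_RE = re.compile(
    r'Pulled image \\?"(?P<image>[^"\\]+)\\?".*'
    r'size \\?"(?P<size>\d+)\\?" in (?P<secs>[\d.]+)s'
)

def parse_pulled(msg: str):
    m = PULLED_RE.search(msg)
    if not m:
        return None
    size = int(m.group("size"))
    secs = float(m.group("secs"))
    return m.group("image"), size, secs, size / secs / (1024 * 1024)

if __name__ == "__main__":
    line = ('Pulled image "quay.io/tigera/operator:v1.38.7" with image id '
            '"sha256:19f5...", repo tag "...", repo digest "...", '
            'size "22147999" in 2.079216497s')
    # Roughly 10 MiB/s for the pull recorded above.
    print(parse_pulled(line))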
Jan 21 23:39:12.906000 audit: BPF prog-id=170 op=LOAD Jan 21 23:39:12.907000 audit: BPF prog-id=171 op=LOAD Jan 21 23:39:12.907000 audit[3921]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000174180 a2=98 a3=0 items=0 ppid=3711 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:12.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866323563303335656362613735636164306266616362376235303963 Jan 21 23:39:12.907000 audit: BPF prog-id=171 op=UNLOAD Jan 21 23:39:12.907000 audit[3921]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3711 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:12.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866323563303335656362613735636164306266616362376235303963 Jan 21 23:39:12.907000 audit: BPF prog-id=172 op=LOAD Jan 21 23:39:12.907000 audit[3921]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001743e8 a2=98 a3=0 items=0 ppid=3711 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:12.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866323563303335656362613735636164306266616362376235303963 Jan 21 23:39:12.907000 audit: BPF prog-id=173 op=LOAD Jan 21 23:39:12.907000 audit[3921]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000174168 a2=98 a3=0 items=0 ppid=3711 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:12.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866323563303335656362613735636164306266616362376235303963 Jan 21 23:39:12.907000 audit: BPF prog-id=173 op=UNLOAD Jan 21 23:39:12.907000 audit[3921]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3711 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:12.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866323563303335656362613735636164306266616362376235303963 Jan 21 23:39:12.907000 audit: BPF prog-id=172 op=UNLOAD Jan 21 23:39:12.907000 audit[3921]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3711 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:12.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866323563303335656362613735636164306266616362376235303963 Jan 21 23:39:12.907000 audit: BPF prog-id=174 op=LOAD Jan 21 23:39:12.907000 audit[3921]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000174648 a2=98 a3=0 items=0 ppid=3711 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:12.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866323563303335656362613735636164306266616362376235303963 Jan 21 23:39:12.925835 containerd[2113]: time="2026-01-21T23:39:12.925790948Z" level=info msg="StartContainer for \"8f25c035ecba75cad0bfacb7b509ce77683d1ea3f28ee94e9e4bfe0eefbf7ad6\" returns successfully" Jan 21 23:39:13.755432 kubelet[3545]: I0121 23:39:13.755278 3545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-4m7qr" podStartSLOduration=1.674528818 podStartE2EDuration="3.755258867s" podCreationTimestamp="2026-01-21 23:39:10 +0000 UTC" firstStartedPulling="2026-01-21 23:39:10.74067916 +0000 UTC m=+8.190539042" lastFinishedPulling="2026-01-21 23:39:12.821409209 +0000 UTC m=+10.271269091" observedRunningTime="2026-01-21 23:39:13.755146855 +0000 UTC m=+11.205006737" watchObservedRunningTime="2026-01-21 23:39:13.755258867 +0000 UTC m=+11.205118749" Jan 21 23:39:18.042489 sudo[2565]: pam_unix(sudo:session): session closed for user root Jan 21 23:39:18.062933 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 21 23:39:18.063116 kernel: audit: type=1106 audit(1769038758.041:532): pid=2565 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 23:39:18.041000 audit[2565]: USER_END pid=2565 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 23:39:18.041000 audit[2565]: CRED_DISP pid=2565 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 23:39:18.085513 kernel: audit: type=1104 audit(1769038758.041:533): pid=2565 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 21 23:39:18.122053 sshd[2564]: Connection closed by 10.200.16.10 port 37946 Jan 21 23:39:18.124970 sshd-session[2561]: pam_unix(sshd:session): session closed for user core Jan 21 23:39:18.125000 audit[2561]: USER_END pid=2561 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:39:18.149923 systemd[1]: sshd@6-10.200.20.29:22-10.200.16.10:37946.service: Deactivated successfully. Jan 21 23:39:18.125000 audit[2561]: CRED_DISP pid=2561 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:39:18.170453 kernel: audit: type=1106 audit(1769038758.125:534): pid=2561 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:39:18.170552 kernel: audit: type=1104 audit(1769038758.125:535): pid=2561 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:39:18.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.29:22-10.200.16.10:37946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:39:18.171969 systemd[1]: session-9.scope: Deactivated successfully. Jan 21 23:39:18.172287 systemd[1]: session-9.scope: Consumed 3.544s CPU time, 226.1M memory peak. Jan 21 23:39:18.175374 systemd-logind[2082]: Session 9 logged out. Waiting for processes to exit. Jan 21 23:39:18.179513 systemd-logind[2082]: Removed session 9. Jan 21 23:39:18.185464 kernel: audit: type=1131 audit(1769038758.151:536): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.29:22-10.200.16.10:37946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:39:19.740000 audit[3999]: NETFILTER_CFG table=filter:108 family=2 entries=15 op=nft_register_rule pid=3999 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:19.740000 audit[3999]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffff32fb160 a2=0 a3=1 items=0 ppid=3761 pid=3999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:19.827938 kernel: audit: type=1325 audit(1769038759.740:537): table=filter:108 family=2 entries=15 op=nft_register_rule pid=3999 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:19.828091 kernel: audit: type=1300 audit(1769038759.740:537): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffff32fb160 a2=0 a3=1 items=0 ppid=3761 pid=3999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:19.740000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:19.843483 kernel: audit: type=1327 audit(1769038759.740:537): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:19.844000 audit[3999]: NETFILTER_CFG table=nat:109 family=2 entries=12 op=nft_register_rule pid=3999 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:19.844000 audit[3999]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff32fb160 a2=0 a3=1 items=0 ppid=3761 pid=3999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:19.879073 kernel: audit: type=1325 audit(1769038759.844:538): table=nat:109 family=2 entries=12 op=nft_register_rule pid=3999 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:19.879204 kernel: audit: type=1300 audit(1769038759.844:538): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff32fb160 a2=0 a3=1 items=0 ppid=3761 pid=3999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:19.844000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:19.920000 audit[4001]: NETFILTER_CFG table=filter:110 family=2 entries=16 op=nft_register_rule pid=4001 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:19.920000 audit[4001]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe53d9030 a2=0 a3=1 items=0 ppid=3761 pid=4001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:19.920000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:19.925000 audit[4001]: NETFILTER_CFG table=nat:111 family=2 entries=12 op=nft_register_rule pid=4001 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 21 23:39:19.925000 audit[4001]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe53d9030 a2=0 a3=1 items=0 ppid=3761 pid=4001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:19.925000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:22.550000 audit[4003]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=4003 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:22.550000 audit[4003]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffcf365cc0 a2=0 a3=1 items=0 ppid=3761 pid=4003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:22.550000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:22.557000 audit[4003]: NETFILTER_CFG table=nat:113 family=2 entries=12 op=nft_register_rule pid=4003 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:22.557000 audit[4003]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcf365cc0 a2=0 a3=1 items=0 ppid=3761 pid=4003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:22.557000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:22.565000 audit[4005]: NETFILTER_CFG table=filter:114 family=2 entries=18 op=nft_register_rule pid=4005 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:22.565000 audit[4005]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe1e73550 a2=0 a3=1 items=0 ppid=3761 pid=4005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:22.565000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:22.568000 audit[4005]: NETFILTER_CFG table=nat:115 family=2 entries=12 op=nft_register_rule pid=4005 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:22.568000 audit[4005]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe1e73550 a2=0 a3=1 items=0 ppid=3761 pid=4005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:22.568000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:23.597000 audit[4009]: NETFILTER_CFG table=filter:116 family=2 entries=19 op=nft_register_rule pid=4009 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:23.611978 kernel: kauditd_printk_skb: 19 callbacks suppressed Jan 21 23:39:23.612158 kernel: audit: type=1325 
audit(1769038763.597:545): table=filter:116 family=2 entries=19 op=nft_register_rule pid=4009 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:23.597000 audit[4009]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffca295380 a2=0 a3=1 items=0 ppid=3761 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:23.630454 kernel: audit: type=1300 audit(1769038763.597:545): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffca295380 a2=0 a3=1 items=0 ppid=3761 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:23.597000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:23.639920 kernel: audit: type=1327 audit(1769038763.597:545): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:23.639000 audit[4009]: NETFILTER_CFG table=nat:117 family=2 entries=12 op=nft_register_rule pid=4009 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:23.639000 audit[4009]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffca295380 a2=0 a3=1 items=0 ppid=3761 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:23.672287 kernel: audit: type=1325 audit(1769038763.639:546): table=nat:117 family=2 entries=12 op=nft_register_rule pid=4009 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:23.672382 kernel: audit: type=1300 audit(1769038763.639:546): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffca295380 a2=0 a3=1 items=0 ppid=3761 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:23.639000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:23.682161 kernel: audit: type=1327 audit(1769038763.639:546): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:24.426739 systemd[1]: Created slice kubepods-besteffort-pod74f553f4_b90e_4e87_9706_21ae4498bf1a.slice - libcontainer container kubepods-besteffort-pod74f553f4_b90e_4e87_9706_21ae4498bf1a.slice. 
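[Editor's note] The kubepods slice name in the record above is derived from the pod UID that appears in the kubelet records below: the dashes in the UID are replaced with underscores, since '-' is the slice hierarchy separator under the systemd cgroup driver. A small sketch of that mapping as observed here (function names are illustrative):

# Map between a pod UID and the kubepods slice name seen in this log.
def pod_uid_to_slice(pod_uid: str, qos_class: str = "besteffort") -> str:
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

def slice_to_pod_uid(slice_name: str) -> str:
    # Drop everything up to the last "pod", strip ".slice", restore dashes.
    stem = slice_name.rsplit("pod", 1)[1].removesuffix(".slice")
    return stem.replace("_", "-")

if __name__ == "__main__":
    uid = "74f553f4-b90e-4e87-9706-21ae4498bf1a"  # calico-typha pod UID from the log
    print(pod_uid_to_slice(uid))
    # -> kubepods-besteffort-pod74f553f4_b90e_4e87_9706_21ae4498bf1a.slice
    print(slice_to_pod_uid(pod_uid_to_slice(uid)))  # round-trips back to the UID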
Jan 21 23:39:24.477573 kubelet[3545]: I0121 23:39:24.477503 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74f553f4-b90e-4e87-9706-21ae4498bf1a-tigera-ca-bundle\") pod \"calico-typha-6676c5b97d-ljbgj\" (UID: \"74f553f4-b90e-4e87-9706-21ae4498bf1a\") " pod="calico-system/calico-typha-6676c5b97d-ljbgj" Jan 21 23:39:24.478223 kubelet[3545]: I0121 23:39:24.477677 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/74f553f4-b90e-4e87-9706-21ae4498bf1a-typha-certs\") pod \"calico-typha-6676c5b97d-ljbgj\" (UID: \"74f553f4-b90e-4e87-9706-21ae4498bf1a\") " pod="calico-system/calico-typha-6676c5b97d-ljbgj" Jan 21 23:39:24.478223 kubelet[3545]: I0121 23:39:24.477703 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59nsw\" (UniqueName: \"kubernetes.io/projected/74f553f4-b90e-4e87-9706-21ae4498bf1a-kube-api-access-59nsw\") pod \"calico-typha-6676c5b97d-ljbgj\" (UID: \"74f553f4-b90e-4e87-9706-21ae4498bf1a\") " pod="calico-system/calico-typha-6676c5b97d-ljbgj" Jan 21 23:39:24.610790 systemd[1]: Created slice kubepods-besteffort-pod114e7040_ab80_4008_bfcf_0117849bf2c5.slice - libcontainer container kubepods-besteffort-pod114e7040_ab80_4008_bfcf_0117849bf2c5.slice. Jan 21 23:39:24.656000 audit[4013]: NETFILTER_CFG table=filter:118 family=2 entries=21 op=nft_register_rule pid=4013 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:24.656000 audit[4013]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe038e3d0 a2=0 a3=1 items=0 ppid=3761 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:24.688152 kernel: audit: type=1325 audit(1769038764.656:547): table=filter:118 family=2 entries=21 op=nft_register_rule pid=4013 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:24.688242 kernel: audit: type=1300 audit(1769038764.656:547): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe038e3d0 a2=0 a3=1 items=0 ppid=3761 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:24.688307 kubelet[3545]: I0121 23:39:24.679627 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/114e7040-ab80-4008-bfcf-0117849bf2c5-node-certs\") pod \"calico-node-78df5\" (UID: \"114e7040-ab80-4008-bfcf-0117849bf2c5\") " pod="calico-system/calico-node-78df5" Jan 21 23:39:24.688307 kubelet[3545]: I0121 23:39:24.679671 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/114e7040-ab80-4008-bfcf-0117849bf2c5-flexvol-driver-host\") pod \"calico-node-78df5\" (UID: \"114e7040-ab80-4008-bfcf-0117849bf2c5\") " pod="calico-system/calico-node-78df5" Jan 21 23:39:24.688307 kubelet[3545]: I0121 23:39:24.679683 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/114e7040-ab80-4008-bfcf-0117849bf2c5-policysync\") pod \"calico-node-78df5\" (UID: \"114e7040-ab80-4008-bfcf-0117849bf2c5\") " pod="calico-system/calico-node-78df5" Jan 21 23:39:24.688307 kubelet[3545]: I0121 23:39:24.679713 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzkk7\" (UniqueName: \"kubernetes.io/projected/114e7040-ab80-4008-bfcf-0117849bf2c5-kube-api-access-nzkk7\") pod \"calico-node-78df5\" (UID: \"114e7040-ab80-4008-bfcf-0117849bf2c5\") " pod="calico-system/calico-node-78df5" Jan 21 23:39:24.688307 kubelet[3545]: I0121 23:39:24.679725 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/114e7040-ab80-4008-bfcf-0117849bf2c5-lib-modules\") pod \"calico-node-78df5\" (UID: \"114e7040-ab80-4008-bfcf-0117849bf2c5\") " pod="calico-system/calico-node-78df5" Jan 21 23:39:24.688437 kubelet[3545]: I0121 23:39:24.679733 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/114e7040-ab80-4008-bfcf-0117849bf2c5-tigera-ca-bundle\") pod \"calico-node-78df5\" (UID: \"114e7040-ab80-4008-bfcf-0117849bf2c5\") " pod="calico-system/calico-node-78df5" Jan 21 23:39:24.688437 kubelet[3545]: I0121 23:39:24.679759 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/114e7040-ab80-4008-bfcf-0117849bf2c5-var-run-calico\") pod \"calico-node-78df5\" (UID: \"114e7040-ab80-4008-bfcf-0117849bf2c5\") " pod="calico-system/calico-node-78df5" Jan 21 23:39:24.688437 kubelet[3545]: I0121 23:39:24.679770 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/114e7040-ab80-4008-bfcf-0117849bf2c5-cni-bin-dir\") pod \"calico-node-78df5\" (UID: \"114e7040-ab80-4008-bfcf-0117849bf2c5\") " pod="calico-system/calico-node-78df5" Jan 21 23:39:24.688437 kubelet[3545]: I0121 23:39:24.679778 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/114e7040-ab80-4008-bfcf-0117849bf2c5-cni-log-dir\") pod \"calico-node-78df5\" (UID: \"114e7040-ab80-4008-bfcf-0117849bf2c5\") " pod="calico-system/calico-node-78df5" Jan 21 23:39:24.688437 kubelet[3545]: I0121 23:39:24.679788 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/114e7040-ab80-4008-bfcf-0117849bf2c5-cni-net-dir\") pod \"calico-node-78df5\" (UID: \"114e7040-ab80-4008-bfcf-0117849bf2c5\") " pod="calico-system/calico-node-78df5" Jan 21 23:39:24.688513 kubelet[3545]: I0121 23:39:24.679799 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/114e7040-ab80-4008-bfcf-0117849bf2c5-xtables-lock\") pod \"calico-node-78df5\" (UID: \"114e7040-ab80-4008-bfcf-0117849bf2c5\") " pod="calico-system/calico-node-78df5" Jan 21 23:39:24.688513 kubelet[3545]: I0121 23:39:24.679809 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/114e7040-ab80-4008-bfcf-0117849bf2c5-var-lib-calico\") pod 
\"calico-node-78df5\" (UID: \"114e7040-ab80-4008-bfcf-0117849bf2c5\") " pod="calico-system/calico-node-78df5" Jan 21 23:39:24.656000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:24.698802 kernel: audit: type=1327 audit(1769038764.656:547): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:24.667000 audit[4013]: NETFILTER_CFG table=nat:119 family=2 entries=12 op=nft_register_rule pid=4013 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:24.709132 kernel: audit: type=1325 audit(1769038764.667:548): table=nat:119 family=2 entries=12 op=nft_register_rule pid=4013 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:24.667000 audit[4013]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe038e3d0 a2=0 a3=1 items=0 ppid=3761 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:24.667000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:24.732752 containerd[2113]: time="2026-01-21T23:39:24.732708048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6676c5b97d-ljbgj,Uid:74f553f4-b90e-4e87-9706-21ae4498bf1a,Namespace:calico-system,Attempt:0,}" Jan 21 23:39:24.771236 containerd[2113]: time="2026-01-21T23:39:24.770331090Z" level=info msg="connecting to shim 623d4e1c90ec5ea976bb64a767e2cfa2b535232f422540db39514a64db666e30" address="unix:///run/containerd/s/b39456a96d82d5d525ac59e9521c36ae55e2c9986b8fa9ad9fbbb6d5cda9aed7" namespace=k8s.io protocol=ttrpc version=3 Jan 21 23:39:24.797975 kubelet[3545]: E0121 23:39:24.797930 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.798646 kubelet[3545]: W0121 23:39:24.798625 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.798744 kubelet[3545]: E0121 23:39:24.798732 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.804344 kubelet[3545]: E0121 23:39:24.804158 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.804344 kubelet[3545]: W0121 23:39:24.804178 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.804344 kubelet[3545]: E0121 23:39:24.804197 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:39:24.812684 kubelet[3545]: E0121 23:39:24.812651 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmwkp" podUID="03532856-4a1c-4971-af49-0f675b6cbf1f" Jan 21 23:39:24.813384 systemd[1]: Started cri-containerd-623d4e1c90ec5ea976bb64a767e2cfa2b535232f422540db39514a64db666e30.scope - libcontainer container 623d4e1c90ec5ea976bb64a767e2cfa2b535232f422540db39514a64db666e30. Jan 21 23:39:24.841000 audit: BPF prog-id=175 op=LOAD Jan 21 23:39:24.842000 audit: BPF prog-id=176 op=LOAD Jan 21 23:39:24.842000 audit[4034]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4023 pid=4034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:24.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632336434653163393065633565613937366262363461373637653263 Jan 21 23:39:24.842000 audit: BPF prog-id=176 op=UNLOAD Jan 21 23:39:24.842000 audit[4034]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4023 pid=4034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:24.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632336434653163393065633565613937366262363461373637653263 Jan 21 23:39:24.843000 audit: BPF prog-id=177 op=LOAD Jan 21 23:39:24.843000 audit[4034]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4023 pid=4034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:24.843000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632336434653163393065633565613937366262363461373637653263 Jan 21 23:39:24.843000 audit: BPF prog-id=178 op=LOAD Jan 21 23:39:24.843000 audit[4034]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4023 pid=4034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:24.843000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632336434653163393065633565613937366262363461373637653263 Jan 21 23:39:24.843000 audit: BPF prog-id=178 op=UNLOAD Jan 21 23:39:24.843000 audit[4034]: SYSCALL arch=c00000b7 syscall=57 
success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4023 pid=4034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:24.843000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632336434653163393065633565613937366262363461373637653263 Jan 21 23:39:24.843000 audit: BPF prog-id=177 op=UNLOAD Jan 21 23:39:24.843000 audit[4034]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4023 pid=4034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:24.843000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632336434653163393065633565613937366262363461373637653263 Jan 21 23:39:24.844000 audit: BPF prog-id=179 op=LOAD Jan 21 23:39:24.844000 audit[4034]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4023 pid=4034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:24.844000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632336434653163393065633565613937366262363461373637653263 Jan 21 23:39:24.869912 kubelet[3545]: E0121 23:39:24.869872 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.869912 kubelet[3545]: W0121 23:39:24.869899 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.869912 kubelet[3545]: E0121 23:39:24.869920 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.871242 kubelet[3545]: E0121 23:39:24.871220 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.871471 kubelet[3545]: W0121 23:39:24.871345 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.871471 kubelet[3545]: E0121 23:39:24.871391 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:39:24.872041 kubelet[3545]: E0121 23:39:24.871798 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.872041 kubelet[3545]: W0121 23:39:24.871886 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.872041 kubelet[3545]: E0121 23:39:24.871904 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.872551 kubelet[3545]: E0121 23:39:24.872523 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.872744 kubelet[3545]: W0121 23:39:24.872535 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.872744 kubelet[3545]: E0121 23:39:24.872637 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.873146 kubelet[3545]: E0121 23:39:24.873062 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.873146 kubelet[3545]: W0121 23:39:24.873074 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.873146 kubelet[3545]: E0121 23:39:24.873086 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.873923 kubelet[3545]: E0121 23:39:24.873845 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.873923 kubelet[3545]: W0121 23:39:24.873858 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.874245 kubelet[3545]: E0121 23:39:24.873869 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.874610 kubelet[3545]: E0121 23:39:24.874582 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.874974 kubelet[3545]: W0121 23:39:24.874916 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.874974 kubelet[3545]: E0121 23:39:24.874941 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:39:24.875425 kubelet[3545]: E0121 23:39:24.875348 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.875425 kubelet[3545]: W0121 23:39:24.875362 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.875837 kubelet[3545]: E0121 23:39:24.875374 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.878357 kubelet[3545]: E0121 23:39:24.878336 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.878933 kubelet[3545]: W0121 23:39:24.878502 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.878933 kubelet[3545]: E0121 23:39:24.878529 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.880079 kubelet[3545]: E0121 23:39:24.879207 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.880079 kubelet[3545]: W0121 23:39:24.879220 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.880079 kubelet[3545]: E0121 23:39:24.879231 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.880309 kubelet[3545]: E0121 23:39:24.880297 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.880381 kubelet[3545]: W0121 23:39:24.880370 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.880436 kubelet[3545]: E0121 23:39:24.880426 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.880698 kubelet[3545]: E0121 23:39:24.880606 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.880698 kubelet[3545]: W0121 23:39:24.880615 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.880698 kubelet[3545]: E0121 23:39:24.880623 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:39:24.880928 kubelet[3545]: E0121 23:39:24.880918 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.881066 kubelet[3545]: W0121 23:39:24.880975 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.881066 kubelet[3545]: E0121 23:39:24.880988 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.881300 kubelet[3545]: E0121 23:39:24.881222 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.881300 kubelet[3545]: W0121 23:39:24.881231 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.881300 kubelet[3545]: E0121 23:39:24.881239 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.881602 kubelet[3545]: E0121 23:39:24.881481 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.881602 kubelet[3545]: W0121 23:39:24.881489 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.881602 kubelet[3545]: E0121 23:39:24.881500 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.881843 kubelet[3545]: E0121 23:39:24.881827 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.882854 kubelet[3545]: W0121 23:39:24.881909 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.882854 kubelet[3545]: E0121 23:39:24.881925 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.883356 kubelet[3545]: E0121 23:39:24.883259 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.883356 kubelet[3545]: W0121 23:39:24.883278 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.883356 kubelet[3545]: E0121 23:39:24.883289 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:39:24.883653 kubelet[3545]: E0121 23:39:24.883642 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.883791 kubelet[3545]: W0121 23:39:24.883711 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.883791 kubelet[3545]: E0121 23:39:24.883725 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.884110 kubelet[3545]: E0121 23:39:24.884098 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.884336 kubelet[3545]: W0121 23:39:24.884246 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.884336 kubelet[3545]: E0121 23:39:24.884263 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.884621 kubelet[3545]: E0121 23:39:24.884568 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.884621 kubelet[3545]: W0121 23:39:24.884577 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.884889 kubelet[3545]: E0121 23:39:24.884797 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.885863 kubelet[3545]: E0121 23:39:24.885833 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.886013 kubelet[3545]: W0121 23:39:24.885951 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.886013 kubelet[3545]: E0121 23:39:24.885968 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:39:24.886245 kubelet[3545]: I0121 23:39:24.886142 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/03532856-4a1c-4971-af49-0f675b6cbf1f-varrun\") pod \"csi-node-driver-vmwkp\" (UID: \"03532856-4a1c-4971-af49-0f675b6cbf1f\") " pod="calico-system/csi-node-driver-vmwkp" Jan 21 23:39:24.886878 kubelet[3545]: E0121 23:39:24.886858 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.886878 kubelet[3545]: W0121 23:39:24.886871 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.886878 kubelet[3545]: E0121 23:39:24.886882 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.887080 kubelet[3545]: E0121 23:39:24.887030 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.887080 kubelet[3545]: W0121 23:39:24.887037 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.887375 kubelet[3545]: E0121 23:39:24.887347 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.887800 kubelet[3545]: E0121 23:39:24.887782 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.887800 kubelet[3545]: W0121 23:39:24.887796 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.887905 kubelet[3545]: E0121 23:39:24.887806 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.887905 kubelet[3545]: I0121 23:39:24.887826 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/03532856-4a1c-4971-af49-0f675b6cbf1f-socket-dir\") pod \"csi-node-driver-vmwkp\" (UID: \"03532856-4a1c-4971-af49-0f675b6cbf1f\") " pod="calico-system/csi-node-driver-vmwkp" Jan 21 23:39:24.888178 kubelet[3545]: E0121 23:39:24.888159 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.888261 kubelet[3545]: W0121 23:39:24.888184 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.888261 kubelet[3545]: E0121 23:39:24.888195 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:39:24.888261 kubelet[3545]: I0121 23:39:24.888217 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/03532856-4a1c-4971-af49-0f675b6cbf1f-registration-dir\") pod \"csi-node-driver-vmwkp\" (UID: \"03532856-4a1c-4971-af49-0f675b6cbf1f\") " pod="calico-system/csi-node-driver-vmwkp" Jan 21 23:39:24.888376 kubelet[3545]: E0121 23:39:24.888363 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.888376 kubelet[3545]: W0121 23:39:24.888371 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.888520 kubelet[3545]: E0121 23:39:24.888380 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.888520 kubelet[3545]: I0121 23:39:24.888397 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc6dl\" (UniqueName: \"kubernetes.io/projected/03532856-4a1c-4971-af49-0f675b6cbf1f-kube-api-access-sc6dl\") pod \"csi-node-driver-vmwkp\" (UID: \"03532856-4a1c-4971-af49-0f675b6cbf1f\") " pod="calico-system/csi-node-driver-vmwkp" Jan 21 23:39:24.889780 kubelet[3545]: E0121 23:39:24.889290 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.889846 containerd[2113]: time="2026-01-21T23:39:24.889652583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6676c5b97d-ljbgj,Uid:74f553f4-b90e-4e87-9706-21ae4498bf1a,Namespace:calico-system,Attempt:0,} returns sandbox id \"623d4e1c90ec5ea976bb64a767e2cfa2b535232f422540db39514a64db666e30\"" Jan 21 23:39:24.890081 kubelet[3545]: W0121 23:39:24.889304 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.890081 kubelet[3545]: E0121 23:39:24.889994 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.890361 kubelet[3545]: E0121 23:39:24.890321 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.890361 kubelet[3545]: W0121 23:39:24.890333 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.890361 kubelet[3545]: E0121 23:39:24.890342 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:39:24.891169 kubelet[3545]: E0121 23:39:24.891153 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.891259 kubelet[3545]: W0121 23:39:24.891248 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.891345 kubelet[3545]: E0121 23:39:24.891328 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.892276 kubelet[3545]: E0121 23:39:24.892259 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.892494 kubelet[3545]: W0121 23:39:24.892403 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.892494 kubelet[3545]: E0121 23:39:24.892422 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.892952 kubelet[3545]: E0121 23:39:24.892936 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.893106 kubelet[3545]: W0121 23:39:24.893017 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.893106 kubelet[3545]: E0121 23:39:24.893032 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.894514 containerd[2113]: time="2026-01-21T23:39:24.893816535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 21 23:39:24.894568 kubelet[3545]: E0121 23:39:24.894223 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.894568 kubelet[3545]: W0121 23:39:24.894234 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.894568 kubelet[3545]: E0121 23:39:24.894246 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:39:24.894721 kubelet[3545]: I0121 23:39:24.894689 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03532856-4a1c-4971-af49-0f675b6cbf1f-kubelet-dir\") pod \"csi-node-driver-vmwkp\" (UID: \"03532856-4a1c-4971-af49-0f675b6cbf1f\") " pod="calico-system/csi-node-driver-vmwkp" Jan 21 23:39:24.894816 kubelet[3545]: E0121 23:39:24.894797 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.895330 kubelet[3545]: W0121 23:39:24.895312 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.895425 kubelet[3545]: E0121 23:39:24.895413 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.895710 kubelet[3545]: E0121 23:39:24.895695 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.895818 kubelet[3545]: W0121 23:39:24.895770 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.895818 kubelet[3545]: E0121 23:39:24.895785 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.895973 kubelet[3545]: E0121 23:39:24.895965 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.896071 kubelet[3545]: W0121 23:39:24.896028 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.896071 kubelet[3545]: E0121 23:39:24.896053 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.914670 containerd[2113]: time="2026-01-21T23:39:24.914628237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-78df5,Uid:114e7040-ab80-4008-bfcf-0117849bf2c5,Namespace:calico-system,Attempt:0,}" Jan 21 23:39:24.956758 containerd[2113]: time="2026-01-21T23:39:24.956428471Z" level=info msg="connecting to shim 4f7333066431df8a8604c3627ef28985da3e0af7ca8276e28ce3164fefd361a3" address="unix:///run/containerd/s/28ebc5892c3e712a3145516d059fd1e9e743ad98781a45fe6b70f80feb33d4af" namespace=k8s.io protocol=ttrpc version=3 Jan 21 23:39:24.975262 systemd[1]: Started cri-containerd-4f7333066431df8a8604c3627ef28985da3e0af7ca8276e28ce3164fefd361a3.scope - libcontainer container 4f7333066431df8a8604c3627ef28985da3e0af7ca8276e28ce3164fefd361a3. 
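The audit PROCTITLE records in this stretch of the journal carry the audited command line as hex with NUL-separated arguments. The short sketch below is a standalone Python aid supplied by the editor, not part of the logged system; the sample string is copied verbatim from the iptables-restore audit record above.

# Decode an audit PROCTITLE value (hex-encoded argv, NUL-separated) into a readable command line.
def decode_proctitle(hex_value: str) -> str:
    return bytes.fromhex(hex_value).decode("ascii", errors="replace").replace("\x00", " ")

# Sample copied from the audit record above (the iptables-restore invocation).
sample = ("69707461626C65732D726573746F7265002D770035002D5700"
          "313030303030002D2D6E6F666C757368002D2D636F756E74657273")
print(decode_proctitle(sample))   # iptables-restore -w 5 -W 100000 --noflush --counters

The runc PROCTITLE entries in the surrounding audit records decode the same way.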
Jan 21 23:39:24.982000 audit: BPF prog-id=180 op=LOAD Jan 21 23:39:24.982000 audit: BPF prog-id=181 op=LOAD Jan 21 23:39:24.982000 audit[4128]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4116 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:24.982000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466373333333036363433316466386138363034633336323765663238 Jan 21 23:39:24.982000 audit: BPF prog-id=181 op=UNLOAD Jan 21 23:39:24.982000 audit[4128]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4116 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:24.982000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466373333333036363433316466386138363034633336323765663238 Jan 21 23:39:24.983000 audit: BPF prog-id=182 op=LOAD Jan 21 23:39:24.983000 audit[4128]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4116 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:24.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466373333333036363433316466386138363034633336323765663238 Jan 21 23:39:24.983000 audit: BPF prog-id=183 op=LOAD Jan 21 23:39:24.983000 audit[4128]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4116 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:24.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466373333333036363433316466386138363034633336323765663238 Jan 21 23:39:24.983000 audit: BPF prog-id=183 op=UNLOAD Jan 21 23:39:24.983000 audit[4128]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4116 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:24.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466373333333036363433316466386138363034633336323765663238 Jan 21 23:39:24.983000 audit: BPF prog-id=182 op=UNLOAD Jan 21 23:39:24.983000 audit[4128]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4116 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:24.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466373333333036363433316466386138363034633336323765663238 Jan 21 23:39:24.983000 audit: BPF prog-id=184 op=LOAD Jan 21 23:39:24.983000 audit[4128]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4116 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:24.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466373333333036363433316466386138363034633336323765663238 Jan 21 23:39:24.996980 kubelet[3545]: E0121 23:39:24.996917 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.997682 kubelet[3545]: W0121 23:39:24.997056 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.997682 kubelet[3545]: E0121 23:39:24.997086 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.997682 kubelet[3545]: E0121 23:39:24.997466 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.997682 kubelet[3545]: W0121 23:39:24.997478 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.997682 kubelet[3545]: E0121 23:39:24.997498 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.997822 kubelet[3545]: E0121 23:39:24.997723 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.997822 kubelet[3545]: W0121 23:39:24.997732 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.997822 kubelet[3545]: E0121 23:39:24.997741 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:39:24.998269 kubelet[3545]: E0121 23:39:24.997919 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.998269 kubelet[3545]: W0121 23:39:24.997931 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.998269 kubelet[3545]: E0121 23:39:24.997938 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.998528 kubelet[3545]: E0121 23:39:24.998402 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.998528 kubelet[3545]: W0121 23:39:24.998416 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.998613 kubelet[3545]: E0121 23:39:24.998576 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.999022 kubelet[3545]: E0121 23:39:24.999008 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.999185 kubelet[3545]: W0121 23:39:24.999076 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.999185 kubelet[3545]: E0121 23:39:24.999088 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:24.999648 kubelet[3545]: E0121 23:39:24.999610 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:24.999648 kubelet[3545]: W0121 23:39:24.999623 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:24.999648 kubelet[3545]: E0121 23:39:24.999635 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:25.000135 kubelet[3545]: E0121 23:39:25.000116 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:25.000562 kubelet[3545]: W0121 23:39:25.000276 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:25.000562 kubelet[3545]: E0121 23:39:25.000299 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:39:25.001393 kubelet[3545]: E0121 23:39:25.001181 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:25.001393 kubelet[3545]: W0121 23:39:25.001198 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:25.001393 kubelet[3545]: E0121 23:39:25.001210 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:25.001662 containerd[2113]: time="2026-01-21T23:39:25.001588278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-78df5,Uid:114e7040-ab80-4008-bfcf-0117849bf2c5,Namespace:calico-system,Attempt:0,} returns sandbox id \"4f7333066431df8a8604c3627ef28985da3e0af7ca8276e28ce3164fefd361a3\"" Jan 21 23:39:25.001899 kubelet[3545]: E0121 23:39:25.001780 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:25.003104 kubelet[3545]: W0121 23:39:25.001957 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:25.003104 kubelet[3545]: E0121 23:39:25.001996 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:25.003473 kubelet[3545]: E0121 23:39:25.003457 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:25.003554 kubelet[3545]: W0121 23:39:25.003542 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:25.003631 kubelet[3545]: E0121 23:39:25.003619 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:25.003954 kubelet[3545]: E0121 23:39:25.003872 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:25.003954 kubelet[3545]: W0121 23:39:25.003883 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:25.003954 kubelet[3545]: E0121 23:39:25.003892 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:39:25.004197 kubelet[3545]: E0121 23:39:25.004164 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:25.004197 kubelet[3545]: W0121 23:39:25.004176 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:25.004368 kubelet[3545]: E0121 23:39:25.004276 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:25.004503 kubelet[3545]: E0121 23:39:25.004464 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:25.004503 kubelet[3545]: W0121 23:39:25.004474 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:25.004503 kubelet[3545]: E0121 23:39:25.004483 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:25.005319 kubelet[3545]: E0121 23:39:25.005294 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:25.005319 kubelet[3545]: W0121 23:39:25.005313 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:25.005404 kubelet[3545]: E0121 23:39:25.005328 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:25.005533 kubelet[3545]: E0121 23:39:25.005499 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:25.005533 kubelet[3545]: W0121 23:39:25.005525 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:25.005533 kubelet[3545]: E0121 23:39:25.005535 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:25.005690 kubelet[3545]: E0121 23:39:25.005668 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:25.005690 kubelet[3545]: W0121 23:39:25.005678 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:25.005690 kubelet[3545]: E0121 23:39:25.005686 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:39:25.005944 kubelet[3545]: E0121 23:39:25.005922 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:25.005944 kubelet[3545]: W0121 23:39:25.005938 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:25.006007 kubelet[3545]: E0121 23:39:25.005952 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:25.006231 kubelet[3545]: E0121 23:39:25.006214 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:25.006231 kubelet[3545]: W0121 23:39:25.006227 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:25.006294 kubelet[3545]: E0121 23:39:25.006237 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:25.006443 kubelet[3545]: E0121 23:39:25.006427 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:25.006443 kubelet[3545]: W0121 23:39:25.006438 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:25.006491 kubelet[3545]: E0121 23:39:25.006447 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:25.006691 kubelet[3545]: E0121 23:39:25.006673 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:25.006691 kubelet[3545]: W0121 23:39:25.006685 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:25.006746 kubelet[3545]: E0121 23:39:25.006694 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:25.008247 kubelet[3545]: E0121 23:39:25.008037 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:25.008247 kubelet[3545]: W0121 23:39:25.008244 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:25.008332 kubelet[3545]: E0121 23:39:25.008261 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:39:25.009259 kubelet[3545]: E0121 23:39:25.009114 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:25.009259 kubelet[3545]: W0121 23:39:25.009131 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:25.009259 kubelet[3545]: E0121 23:39:25.009145 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:25.011081 kubelet[3545]: E0121 23:39:25.009529 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:25.011165 kubelet[3545]: W0121 23:39:25.011149 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:25.011230 kubelet[3545]: E0121 23:39:25.011221 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:25.011489 kubelet[3545]: E0121 23:39:25.011477 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:25.011572 kubelet[3545]: W0121 23:39:25.011539 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:25.011572 kubelet[3545]: E0121 23:39:25.011553 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:25.015263 kubelet[3545]: E0121 23:39:25.015240 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:25.015263 kubelet[3545]: W0121 23:39:25.015257 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:25.015331 kubelet[3545]: E0121 23:39:25.015272 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:26.267964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2481235634.mount: Deactivated successfully. 
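The recurring driver-call failures logged above are the kubelet probing the FlexVolume plugin directory nodeagent~uds while the expected uds executable is missing, so the init call produces empty output and the JSON unmarshal fails. As a hedged illustration only (this stub is not the node-agent driver that normally lives at that path), a FlexVolume executable is expected to answer init with a JSON status object roughly like the Python sketch below.

#!/usr/bin/env python3
# Hypothetical stand-in for a FlexVolume driver binary such as
# /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds.
# It only illustrates the JSON handshake the kubelet's driver-call expects;
# an empty reply is what yields "unexpected end of JSON input" in the log above.
import json
import sys

def main() -> int:
    if len(sys.argv) > 1 and sys.argv[1] == "init":
        # Minimal successful init reply; attach is declined so the kubelet
        # handles mounts itself.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Any other call is not implemented by this sketch.
    print(json.dumps({"status": "Not supported"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())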
Jan 21 23:39:26.691001 kubelet[3545]: E0121 23:39:26.690008 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmwkp" podUID="03532856-4a1c-4971-af49-0f675b6cbf1f" Jan 21 23:39:26.697319 containerd[2113]: time="2026-01-21T23:39:26.696746522Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:39:26.699500 containerd[2113]: time="2026-01-21T23:39:26.699442647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Jan 21 23:39:26.702259 containerd[2113]: time="2026-01-21T23:39:26.702201215Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:39:26.707456 containerd[2113]: time="2026-01-21T23:39:26.706228154Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:39:26.707456 containerd[2113]: time="2026-01-21T23:39:26.706674801Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.812822601s" Jan 21 23:39:26.707456 containerd[2113]: time="2026-01-21T23:39:26.706701266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 21 23:39:26.708887 containerd[2113]: time="2026-01-21T23:39:26.708857668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 21 23:39:26.727120 containerd[2113]: time="2026-01-21T23:39:26.727082113Z" level=info msg="CreateContainer within sandbox \"623d4e1c90ec5ea976bb64a767e2cfa2b535232f422540db39514a64db666e30\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 21 23:39:26.749075 containerd[2113]: time="2026-01-21T23:39:26.747426159Z" level=info msg="Container 5a3743e6bf430eaa2ceae37949455390e12a908e9c7366605f0e0d27547e102a: CDI devices from CRI Config.CDIDevices: []" Jan 21 23:39:26.763075 containerd[2113]: time="2026-01-21T23:39:26.763015889Z" level=info msg="CreateContainer within sandbox \"623d4e1c90ec5ea976bb64a767e2cfa2b535232f422540db39514a64db666e30\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5a3743e6bf430eaa2ceae37949455390e12a908e9c7366605f0e0d27547e102a\"" Jan 21 23:39:26.764148 containerd[2113]: time="2026-01-21T23:39:26.763989723Z" level=info msg="StartContainer for \"5a3743e6bf430eaa2ceae37949455390e12a908e9c7366605f0e0d27547e102a\"" Jan 21 23:39:26.765923 containerd[2113]: time="2026-01-21T23:39:26.765883236Z" level=info msg="connecting to shim 5a3743e6bf430eaa2ceae37949455390e12a908e9c7366605f0e0d27547e102a" address="unix:///run/containerd/s/b39456a96d82d5d525ac59e9521c36ae55e2c9986b8fa9ad9fbbb6d5cda9aed7" protocol=ttrpc version=3 Jan 21 23:39:26.789265 systemd[1]: Started 
cri-containerd-5a3743e6bf430eaa2ceae37949455390e12a908e9c7366605f0e0d27547e102a.scope - libcontainer container 5a3743e6bf430eaa2ceae37949455390e12a908e9c7366605f0e0d27547e102a. Jan 21 23:39:26.809000 audit: BPF prog-id=185 op=LOAD Jan 21 23:39:26.809000 audit: BPF prog-id=186 op=LOAD Jan 21 23:39:26.809000 audit[4190]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4023 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:26.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561333734336536626634333065616132636561653337393439343535 Jan 21 23:39:26.809000 audit: BPF prog-id=186 op=UNLOAD Jan 21 23:39:26.809000 audit[4190]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4023 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:26.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561333734336536626634333065616132636561653337393439343535 Jan 21 23:39:26.809000 audit: BPF prog-id=187 op=LOAD Jan 21 23:39:26.809000 audit[4190]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4023 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:26.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561333734336536626634333065616132636561653337393439343535 Jan 21 23:39:26.809000 audit: BPF prog-id=188 op=LOAD Jan 21 23:39:26.809000 audit[4190]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4023 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:26.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561333734336536626634333065616132636561653337393439343535 Jan 21 23:39:26.809000 audit: BPF prog-id=188 op=UNLOAD Jan 21 23:39:26.809000 audit[4190]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4023 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:26.809000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561333734336536626634333065616132636561653337393439343535 Jan 21 23:39:26.810000 audit: BPF prog-id=187 op=UNLOAD Jan 21 23:39:26.810000 audit[4190]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4023 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:26.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561333734336536626634333065616132636561653337393439343535 Jan 21 23:39:26.810000 audit: BPF prog-id=189 op=LOAD Jan 21 23:39:26.810000 audit[4190]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4023 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:26.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561333734336536626634333065616132636561653337393439343535 Jan 21 23:39:26.840521 containerd[2113]: time="2026-01-21T23:39:26.840481538Z" level=info msg="StartContainer for \"5a3743e6bf430eaa2ceae37949455390e12a908e9c7366605f0e0d27547e102a\" returns successfully" Jan 21 23:39:26.872882 kubelet[3545]: I0121 23:39:26.872738 3545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6676c5b97d-ljbgj" podStartSLOduration=1.056872497 podStartE2EDuration="2.872723259s" podCreationTimestamp="2026-01-21 23:39:24 +0000 UTC" firstStartedPulling="2026-01-21 23:39:24.892642958 +0000 UTC m=+22.342502840" lastFinishedPulling="2026-01-21 23:39:26.708493696 +0000 UTC m=+24.158353602" observedRunningTime="2026-01-21 23:39:26.872567341 +0000 UTC m=+24.322427239" watchObservedRunningTime="2026-01-21 23:39:26.872723259 +0000 UTC m=+24.322583149" Jan 21 23:39:26.902402 kubelet[3545]: E0121 23:39:26.902360 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.902402 kubelet[3545]: W0121 23:39:26.902385 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.902402 kubelet[3545]: E0121 23:39:26.902405 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:39:26.902738 kubelet[3545]: E0121 23:39:26.902600 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.902738 kubelet[3545]: W0121 23:39:26.902609 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.902738 kubelet[3545]: E0121 23:39:26.902617 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:26.902824 kubelet[3545]: E0121 23:39:26.902754 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.902824 kubelet[3545]: W0121 23:39:26.902763 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.902824 kubelet[3545]: E0121 23:39:26.902769 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:26.902900 kubelet[3545]: E0121 23:39:26.902888 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.902900 kubelet[3545]: W0121 23:39:26.902895 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.902928 kubelet[3545]: E0121 23:39:26.902901 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:26.903031 kubelet[3545]: E0121 23:39:26.903020 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.903073 kubelet[3545]: W0121 23:39:26.903028 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.903073 kubelet[3545]: E0121 23:39:26.903040 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:26.903199 kubelet[3545]: E0121 23:39:26.903168 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.903199 kubelet[3545]: W0121 23:39:26.903176 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.903199 kubelet[3545]: E0121 23:39:26.903193 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:39:26.903294 kubelet[3545]: E0121 23:39:26.903291 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.903350 kubelet[3545]: W0121 23:39:26.903298 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.903350 kubelet[3545]: E0121 23:39:26.903304 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:26.903425 kubelet[3545]: E0121 23:39:26.903406 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.903425 kubelet[3545]: W0121 23:39:26.903426 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.903468 kubelet[3545]: E0121 23:39:26.903433 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:26.903567 kubelet[3545]: E0121 23:39:26.903553 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.903567 kubelet[3545]: W0121 23:39:26.903562 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.903620 kubelet[3545]: E0121 23:39:26.903581 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:26.903689 kubelet[3545]: E0121 23:39:26.903677 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.903689 kubelet[3545]: W0121 23:39:26.903684 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.903724 kubelet[3545]: E0121 23:39:26.903690 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:26.903794 kubelet[3545]: E0121 23:39:26.903782 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.903794 kubelet[3545]: W0121 23:39:26.903790 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.903830 kubelet[3545]: E0121 23:39:26.903795 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:39:26.903919 kubelet[3545]: E0121 23:39:26.903907 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.903919 kubelet[3545]: W0121 23:39:26.903914 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.903919 kubelet[3545]: E0121 23:39:26.903920 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:26.904034 kubelet[3545]: E0121 23:39:26.904022 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.904081 kubelet[3545]: W0121 23:39:26.904041 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.904081 kubelet[3545]: E0121 23:39:26.904056 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:26.904170 kubelet[3545]: E0121 23:39:26.904158 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.904170 kubelet[3545]: W0121 23:39:26.904166 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.904205 kubelet[3545]: E0121 23:39:26.904172 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:26.904293 kubelet[3545]: E0121 23:39:26.904282 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.904293 kubelet[3545]: W0121 23:39:26.904289 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.904335 kubelet[3545]: E0121 23:39:26.904294 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:26.912946 kubelet[3545]: E0121 23:39:26.912913 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.912946 kubelet[3545]: W0121 23:39:26.912936 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.912946 kubelet[3545]: E0121 23:39:26.912953 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:39:26.913660 kubelet[3545]: E0121 23:39:26.913635 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.913660 kubelet[3545]: W0121 23:39:26.913651 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.913749 kubelet[3545]: E0121 23:39:26.913662 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:26.914753 kubelet[3545]: E0121 23:39:26.914728 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.914753 kubelet[3545]: W0121 23:39:26.914744 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.914753 kubelet[3545]: E0121 23:39:26.914755 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:26.915064 kubelet[3545]: E0121 23:39:26.915029 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.915241 kubelet[3545]: W0121 23:39:26.915137 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.915241 kubelet[3545]: E0121 23:39:26.915155 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:26.915423 kubelet[3545]: E0121 23:39:26.915412 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.917101 kubelet[3545]: W0121 23:39:26.917080 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.917280 kubelet[3545]: E0121 23:39:26.917178 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:26.917423 kubelet[3545]: E0121 23:39:26.917413 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.917495 kubelet[3545]: W0121 23:39:26.917484 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.917622 kubelet[3545]: E0121 23:39:26.917532 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:39:26.917759 kubelet[3545]: E0121 23:39:26.917749 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.917816 kubelet[3545]: W0121 23:39:26.917807 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.917864 kubelet[3545]: E0121 23:39:26.917853 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:26.918029 kubelet[3545]: E0121 23:39:26.918019 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.918221 kubelet[3545]: W0121 23:39:26.918110 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.918221 kubelet[3545]: E0121 23:39:26.918125 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:26.918362 kubelet[3545]: E0121 23:39:26.918353 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.918693 kubelet[3545]: W0121 23:39:26.918407 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.918693 kubelet[3545]: E0121 23:39:26.918422 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:26.919070 kubelet[3545]: E0121 23:39:26.919021 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.919070 kubelet[3545]: W0121 23:39:26.919036 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.919070 kubelet[3545]: E0121 23:39:26.919057 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:26.919273 kubelet[3545]: E0121 23:39:26.919260 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.919273 kubelet[3545]: W0121 23:39:26.919270 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.919323 kubelet[3545]: E0121 23:39:26.919279 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:39:26.920759 kubelet[3545]: E0121 23:39:26.920743 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.920759 kubelet[3545]: W0121 23:39:26.920755 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.920905 kubelet[3545]: E0121 23:39:26.920764 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:26.920983 kubelet[3545]: E0121 23:39:26.920970 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.920983 kubelet[3545]: W0121 23:39:26.920981 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.921030 kubelet[3545]: E0121 23:39:26.920989 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:26.921193 kubelet[3545]: E0121 23:39:26.921178 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.921193 kubelet[3545]: W0121 23:39:26.921189 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.921258 kubelet[3545]: E0121 23:39:26.921197 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:26.921475 kubelet[3545]: E0121 23:39:26.921461 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.921475 kubelet[3545]: W0121 23:39:26.921471 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.921530 kubelet[3545]: E0121 23:39:26.921479 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:26.921627 kubelet[3545]: E0121 23:39:26.921616 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.921627 kubelet[3545]: W0121 23:39:26.921626 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.921686 kubelet[3545]: E0121 23:39:26.921634 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:39:26.921795 kubelet[3545]: E0121 23:39:26.921785 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.921795 kubelet[3545]: W0121 23:39:26.921792 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.921842 kubelet[3545]: E0121 23:39:26.921799 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:26.923086 kubelet[3545]: E0121 23:39:26.923037 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:26.923086 kubelet[3545]: W0121 23:39:26.923066 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:26.923086 kubelet[3545]: E0121 23:39:26.923079 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:27.858072 kubelet[3545]: I0121 23:39:27.857893 3545 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 23:39:27.911079 kubelet[3545]: E0121 23:39:27.910969 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.911079 kubelet[3545]: W0121 23:39:27.910995 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.911079 kubelet[3545]: E0121 23:39:27.911015 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:27.911272 kubelet[3545]: E0121 23:39:27.911171 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.911272 kubelet[3545]: W0121 23:39:27.911178 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.911272 kubelet[3545]: E0121 23:39:27.911185 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:27.911319 kubelet[3545]: E0121 23:39:27.911282 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.911319 kubelet[3545]: W0121 23:39:27.911287 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.911319 kubelet[3545]: E0121 23:39:27.911293 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:39:27.911398 kubelet[3545]: E0121 23:39:27.911379 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.911398 kubelet[3545]: W0121 23:39:27.911390 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.911398 kubelet[3545]: E0121 23:39:27.911395 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:27.911540 kubelet[3545]: E0121 23:39:27.911526 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.911540 kubelet[3545]: W0121 23:39:27.911536 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.911583 kubelet[3545]: E0121 23:39:27.911542 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:27.911658 kubelet[3545]: E0121 23:39:27.911645 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.911658 kubelet[3545]: W0121 23:39:27.911654 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.911686 kubelet[3545]: E0121 23:39:27.911662 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:27.911756 kubelet[3545]: E0121 23:39:27.911745 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.911756 kubelet[3545]: W0121 23:39:27.911753 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.911785 kubelet[3545]: E0121 23:39:27.911758 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:27.911847 kubelet[3545]: E0121 23:39:27.911836 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.911847 kubelet[3545]: W0121 23:39:27.911844 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.911876 kubelet[3545]: E0121 23:39:27.911849 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:39:27.911991 kubelet[3545]: E0121 23:39:27.911978 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.911991 kubelet[3545]: W0121 23:39:27.911987 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.912026 kubelet[3545]: E0121 23:39:27.911993 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:27.912120 kubelet[3545]: E0121 23:39:27.912107 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.912120 kubelet[3545]: W0121 23:39:27.912116 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.912148 kubelet[3545]: E0121 23:39:27.912121 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:27.912213 kubelet[3545]: E0121 23:39:27.912202 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.912213 kubelet[3545]: W0121 23:39:27.912210 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.912247 kubelet[3545]: E0121 23:39:27.912216 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:27.912319 kubelet[3545]: E0121 23:39:27.912299 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.912319 kubelet[3545]: W0121 23:39:27.912306 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.912319 kubelet[3545]: E0121 23:39:27.912311 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:27.912412 kubelet[3545]: E0121 23:39:27.912399 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.912412 kubelet[3545]: W0121 23:39:27.912407 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.912442 kubelet[3545]: E0121 23:39:27.912413 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:39:27.912507 kubelet[3545]: E0121 23:39:27.912497 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.912507 kubelet[3545]: W0121 23:39:27.912505 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.912543 kubelet[3545]: E0121 23:39:27.912510 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:27.912624 kubelet[3545]: E0121 23:39:27.912590 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.912624 kubelet[3545]: W0121 23:39:27.912597 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.912624 kubelet[3545]: E0121 23:39:27.912602 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:27.921076 kubelet[3545]: E0121 23:39:27.921034 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.921076 kubelet[3545]: W0121 23:39:27.921067 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.921076 kubelet[3545]: E0121 23:39:27.921080 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:27.921266 kubelet[3545]: E0121 23:39:27.921234 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.921266 kubelet[3545]: W0121 23:39:27.921246 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.921266 kubelet[3545]: E0121 23:39:27.921256 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:27.921404 kubelet[3545]: E0121 23:39:27.921380 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.921404 kubelet[3545]: W0121 23:39:27.921385 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.921404 kubelet[3545]: E0121 23:39:27.921393 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:39:27.921611 kubelet[3545]: E0121 23:39:27.921594 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.921611 kubelet[3545]: W0121 23:39:27.921605 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.921675 kubelet[3545]: E0121 23:39:27.921613 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:27.921754 kubelet[3545]: E0121 23:39:27.921743 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.921754 kubelet[3545]: W0121 23:39:27.921751 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.921795 kubelet[3545]: E0121 23:39:27.921757 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:27.921874 kubelet[3545]: E0121 23:39:27.921859 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.921874 kubelet[3545]: W0121 23:39:27.921868 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.921923 kubelet[3545]: E0121 23:39:27.921875 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:27.922003 kubelet[3545]: E0121 23:39:27.921991 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.922003 kubelet[3545]: W0121 23:39:27.921999 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.922038 kubelet[3545]: E0121 23:39:27.922006 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:27.922200 kubelet[3545]: E0121 23:39:27.922185 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.922200 kubelet[3545]: W0121 23:39:27.922196 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.922239 kubelet[3545]: E0121 23:39:27.922202 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:39:27.922424 kubelet[3545]: E0121 23:39:27.922410 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.922424 kubelet[3545]: W0121 23:39:27.922420 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.922462 kubelet[3545]: E0121 23:39:27.922426 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:27.922534 kubelet[3545]: E0121 23:39:27.922521 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.922534 kubelet[3545]: W0121 23:39:27.922530 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.922571 kubelet[3545]: E0121 23:39:27.922535 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:27.922666 kubelet[3545]: E0121 23:39:27.922653 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.922666 kubelet[3545]: W0121 23:39:27.922661 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.922666 kubelet[3545]: E0121 23:39:27.922666 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:27.922767 kubelet[3545]: E0121 23:39:27.922752 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.922767 kubelet[3545]: W0121 23:39:27.922760 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.922767 kubelet[3545]: E0121 23:39:27.922765 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:27.922961 kubelet[3545]: E0121 23:39:27.922947 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.922961 kubelet[3545]: W0121 23:39:27.922957 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.922961 kubelet[3545]: E0121 23:39:27.922962 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:39:27.923521 kubelet[3545]: E0121 23:39:27.923295 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.923521 kubelet[3545]: W0121 23:39:27.923366 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.923521 kubelet[3545]: E0121 23:39:27.923378 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:27.924076 kubelet[3545]: E0121 23:39:27.923780 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.924076 kubelet[3545]: W0121 23:39:27.923793 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.924076 kubelet[3545]: E0121 23:39:27.923802 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:27.924589 kubelet[3545]: E0121 23:39:27.924215 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.924589 kubelet[3545]: W0121 23:39:27.924227 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.924589 kubelet[3545]: E0121 23:39:27.924239 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:27.924589 kubelet[3545]: E0121 23:39:27.924433 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.924589 kubelet[3545]: W0121 23:39:27.924442 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.924589 kubelet[3545]: E0121 23:39:27.924451 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:39:27.924589 kubelet[3545]: E0121 23:39:27.924577 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:39:27.924589 kubelet[3545]: W0121 23:39:27.924583 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:39:27.924589 kubelet[3545]: E0121 23:39:27.924591 3545 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:39:28.105919 containerd[2113]: time="2026-01-21T23:39:28.105860353Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:39:28.109494 containerd[2113]: time="2026-01-21T23:39:28.109383898Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 21 23:39:28.112926 containerd[2113]: time="2026-01-21T23:39:28.112889555Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:39:28.116717 containerd[2113]: time="2026-01-21T23:39:28.116679398Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:39:28.117381 containerd[2113]: time="2026-01-21T23:39:28.117351613Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.408363957s" Jan 21 23:39:28.117418 containerd[2113]: time="2026-01-21T23:39:28.117386647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 21 23:39:28.124323 containerd[2113]: time="2026-01-21T23:39:28.124287837Z" level=info msg="CreateContainer within sandbox \"4f7333066431df8a8604c3627ef28985da3e0af7ca8276e28ce3164fefd361a3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 21 23:39:28.146290 containerd[2113]: time="2026-01-21T23:39:28.145494104Z" level=info msg="Container d2f0e7eb25a372ccc8a3450f008f8b213b1aff6011244f82f0cfbeadb1501aec: CDI devices from CRI Config.CDIDevices: []" Jan 21 23:39:28.147234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount376371000.mount: Deactivated successfully. Jan 21 23:39:28.162392 containerd[2113]: time="2026-01-21T23:39:28.162349798Z" level=info msg="CreateContainer within sandbox \"4f7333066431df8a8604c3627ef28985da3e0af7ca8276e28ce3164fefd361a3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d2f0e7eb25a372ccc8a3450f008f8b213b1aff6011244f82f0cfbeadb1501aec\"" Jan 21 23:39:28.163363 containerd[2113]: time="2026-01-21T23:39:28.163246637Z" level=info msg="StartContainer for \"d2f0e7eb25a372ccc8a3450f008f8b213b1aff6011244f82f0cfbeadb1501aec\"" Jan 21 23:39:28.164747 containerd[2113]: time="2026-01-21T23:39:28.164713184Z" level=info msg="connecting to shim d2f0e7eb25a372ccc8a3450f008f8b213b1aff6011244f82f0cfbeadb1501aec" address="unix:///run/containerd/s/28ebc5892c3e712a3145516d059fd1e9e743ad98781a45fe6b70f80feb33d4af" protocol=ttrpc version=3 Jan 21 23:39:28.184237 systemd[1]: Started cri-containerd-d2f0e7eb25a372ccc8a3450f008f8b213b1aff6011244f82f0cfbeadb1501aec.scope - libcontainer container d2f0e7eb25a372ccc8a3450f008f8b213b1aff6011244f82f0cfbeadb1501aec. 
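The PullImage → "CreateContainer within sandbox" → StartContainer → "connecting to shim … protocol=ttrpc" records above are containerd's standard container lifecycle, driven by kubelet over CRI in the k8s.io namespace. A minimal sketch of the same pull/create/start sequence using containerd's public Go client (an illustration only; kubelet goes through the CRI service, and the container ID and snapshot name below are made up for the example):

// Hedged sketch: reproduce the pull → create → start lifecycle shown above using
// containerd's Go client directly, in the same "k8s.io" namespace.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// PullImage: fetch and unpack the image, as in the "Pulled image" records above.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer: container metadata plus an OCI spec derived from the image config.
	container, err := client.NewContainer(ctx, "flexvol-driver-example",
		containerd.WithNewSnapshot("flexvol-driver-example-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// StartContainer: creating the task spawns the shim ("connecting to shim" in the
	// log); Start then asks the shim to launch the container process via runc.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Println("container started; waiting and cleanup omitted for brevity")
}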
Jan 21 23:39:28.221000 audit: BPF prog-id=190 op=LOAD Jan 21 23:39:28.221000 audit[4301]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4116 pid=4301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:28.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432663065376562323561333732636363386133343530663030386638 Jan 21 23:39:28.221000 audit: BPF prog-id=191 op=LOAD Jan 21 23:39:28.221000 audit[4301]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4116 pid=4301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:28.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432663065376562323561333732636363386133343530663030386638 Jan 21 23:39:28.221000 audit: BPF prog-id=191 op=UNLOAD Jan 21 23:39:28.221000 audit[4301]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4116 pid=4301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:28.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432663065376562323561333732636363386133343530663030386638 Jan 21 23:39:28.221000 audit: BPF prog-id=190 op=UNLOAD Jan 21 23:39:28.221000 audit[4301]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4116 pid=4301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:28.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432663065376562323561333732636363386133343530663030386638 Jan 21 23:39:28.221000 audit: BPF prog-id=192 op=LOAD Jan 21 23:39:28.221000 audit[4301]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4116 pid=4301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:28.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432663065376562323561333732636363386133343530663030386638 Jan 21 23:39:28.244378 containerd[2113]: time="2026-01-21T23:39:28.244304786Z" level=info msg="StartContainer for 
\"d2f0e7eb25a372ccc8a3450f008f8b213b1aff6011244f82f0cfbeadb1501aec\" returns successfully" Jan 21 23:39:28.250514 systemd[1]: cri-containerd-d2f0e7eb25a372ccc8a3450f008f8b213b1aff6011244f82f0cfbeadb1501aec.scope: Deactivated successfully. Jan 21 23:39:28.251000 audit: BPF prog-id=192 op=UNLOAD Jan 21 23:39:28.254714 containerd[2113]: time="2026-01-21T23:39:28.254664927Z" level=info msg="received container exit event container_id:\"d2f0e7eb25a372ccc8a3450f008f8b213b1aff6011244f82f0cfbeadb1501aec\" id:\"d2f0e7eb25a372ccc8a3450f008f8b213b1aff6011244f82f0cfbeadb1501aec\" pid:4313 exited_at:{seconds:1769038768 nanos:253374723}" Jan 21 23:39:28.273423 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2f0e7eb25a372ccc8a3450f008f8b213b1aff6011244f82f0cfbeadb1501aec-rootfs.mount: Deactivated successfully. Jan 21 23:39:28.690303 kubelet[3545]: E0121 23:39:28.689522 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmwkp" podUID="03532856-4a1c-4971-af49-0f675b6cbf1f" Jan 21 23:39:29.113535 waagent[2323]: 2026-01-21T23:39:29.113480Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 21 23:39:29.123277 waagent[2323]: 2026-01-21T23:39:29.123237Z INFO ExtHandler Jan 21 23:39:29.123373 waagent[2323]: 2026-01-21T23:39:29.123348Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: a8054d39-2a7c-4610-9562-c8f041d848d6 eTag: 5318716985255562992 source: Fabric] Jan 21 23:39:29.123679 waagent[2323]: 2026-01-21T23:39:29.123648Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 21 23:39:29.124190 waagent[2323]: 2026-01-21T23:39:29.124154Z INFO ExtHandler Jan 21 23:39:29.124235 waagent[2323]: 2026-01-21T23:39:29.124217Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 21 23:39:29.174521 waagent[2323]: 2026-01-21T23:39:29.174473Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 21 23:39:29.227444 waagent[2323]: 2026-01-21T23:39:29.227365Z INFO ExtHandler Downloaded certificate {'thumbprint': '18D89E640E14AB67F185A66AC92AA151119E1C18', 'hasPrivateKey': True} Jan 21 23:39:29.227887 waagent[2323]: 2026-01-21T23:39:29.227849Z INFO ExtHandler Fetch goal state completed Jan 21 23:39:29.228235 waagent[2323]: 2026-01-21T23:39:29.228204Z INFO ExtHandler ExtHandler Jan 21 23:39:29.228313 waagent[2323]: 2026-01-21T23:39:29.228290Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 1579d1b8-a3b1-47bf-94db-f592e4545137 correlation 6696ebfd-bb76-4609-a952-807b2fbc55db created: 2026-01-21T23:39:22.293839Z] Jan 21 23:39:29.228625 waagent[2323]: 2026-01-21T23:39:29.228580Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jan 21 23:39:29.229174 waagent[2323]: 2026-01-21T23:39:29.229139Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Jan 21 23:39:29.866958 containerd[2113]: time="2026-01-21T23:39:29.866920033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 21 23:39:30.689608 kubelet[3545]: E0121 23:39:30.689544 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmwkp" podUID="03532856-4a1c-4971-af49-0f675b6cbf1f" Jan 21 23:39:31.958572 containerd[2113]: time="2026-01-21T23:39:31.958510038Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:39:31.961056 containerd[2113]: time="2026-01-21T23:39:31.960884323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Jan 21 23:39:31.963329 containerd[2113]: time="2026-01-21T23:39:31.963302587Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:39:31.966850 containerd[2113]: time="2026-01-21T23:39:31.966803357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:39:31.967355 containerd[2113]: time="2026-01-21T23:39:31.967199244Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.100244938s" Jan 21 23:39:31.967355 containerd[2113]: time="2026-01-21T23:39:31.967227677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 21 23:39:31.974289 containerd[2113]: time="2026-01-21T23:39:31.974206848Z" level=info msg="CreateContainer within sandbox \"4f7333066431df8a8604c3627ef28985da3e0af7ca8276e28ce3164fefd361a3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 21 23:39:31.991818 containerd[2113]: time="2026-01-21T23:39:31.991768787Z" level=info msg="Container 940d11380e2f6de20cca3d9080daec5e0085fb1696dd0192ae3824e95e623eb9: CDI devices from CRI Config.CDIDevices: []" Jan 21 23:39:32.010411 containerd[2113]: time="2026-01-21T23:39:32.010368120Z" level=info msg="CreateContainer within sandbox \"4f7333066431df8a8604c3627ef28985da3e0af7ca8276e28ce3164fefd361a3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"940d11380e2f6de20cca3d9080daec5e0085fb1696dd0192ae3824e95e623eb9\"" Jan 21 23:39:32.012302 containerd[2113]: time="2026-01-21T23:39:32.012271603Z" level=info msg="StartContainer for \"940d11380e2f6de20cca3d9080daec5e0085fb1696dd0192ae3824e95e623eb9\"" Jan 21 23:39:32.014357 containerd[2113]: time="2026-01-21T23:39:32.014308635Z" level=info msg="connecting to shim 940d11380e2f6de20cca3d9080daec5e0085fb1696dd0192ae3824e95e623eb9" 
address="unix:///run/containerd/s/28ebc5892c3e712a3145516d059fd1e9e743ad98781a45fe6b70f80feb33d4af" protocol=ttrpc version=3 Jan 21 23:39:32.035267 systemd[1]: Started cri-containerd-940d11380e2f6de20cca3d9080daec5e0085fb1696dd0192ae3824e95e623eb9.scope - libcontainer container 940d11380e2f6de20cca3d9080daec5e0085fb1696dd0192ae3824e95e623eb9. Jan 21 23:39:32.080079 kernel: kauditd_printk_skb: 84 callbacks suppressed Jan 21 23:39:32.080223 kernel: audit: type=1334 audit(1769038772.075:579): prog-id=193 op=LOAD Jan 21 23:39:32.075000 audit: BPF prog-id=193 op=LOAD Jan 21 23:39:32.075000 audit[4367]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=4116 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:32.100948 kernel: audit: type=1300 audit(1769038772.075:579): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=4116 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:32.075000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934306431313338306532663664653230636361336439303830646165 Jan 21 23:39:32.117696 kernel: audit: type=1327 audit(1769038772.075:579): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934306431313338306532663664653230636361336439303830646165 Jan 21 23:39:32.075000 audit: BPF prog-id=194 op=LOAD Jan 21 23:39:32.122582 kernel: audit: type=1334 audit(1769038772.075:580): prog-id=194 op=LOAD Jan 21 23:39:32.075000 audit[4367]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=4116 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:32.138881 kernel: audit: type=1300 audit(1769038772.075:580): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=4116 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:32.075000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934306431313338306532663664653230636361336439303830646165 Jan 21 23:39:32.156501 kernel: audit: type=1327 audit(1769038772.075:580): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934306431313338306532663664653230636361336439303830646165 Jan 21 23:39:32.078000 audit: BPF prog-id=194 op=UNLOAD Jan 21 23:39:32.163714 kernel: audit: type=1334 audit(1769038772.078:581): prog-id=194 op=UNLOAD Jan 21 23:39:32.078000 
audit[4367]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4116 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:32.180892 kernel: audit: type=1300 audit(1769038772.078:581): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4116 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:32.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934306431313338306532663664653230636361336439303830646165 Jan 21 23:39:32.198757 kernel: audit: type=1327 audit(1769038772.078:581): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934306431313338306532663664653230636361336439303830646165 Jan 21 23:39:32.078000 audit: BPF prog-id=193 op=UNLOAD Jan 21 23:39:32.204323 kernel: audit: type=1334 audit(1769038772.078:582): prog-id=193 op=UNLOAD Jan 21 23:39:32.078000 audit[4367]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4116 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:32.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934306431313338306532663664653230636361336439303830646165 Jan 21 23:39:32.078000 audit: BPF prog-id=195 op=LOAD Jan 21 23:39:32.078000 audit[4367]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=4116 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:32.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934306431313338306532663664653230636361336439303830646165 Jan 21 23:39:32.210573 containerd[2113]: time="2026-01-21T23:39:32.210481645Z" level=info msg="StartContainer for \"940d11380e2f6de20cca3d9080daec5e0085fb1696dd0192ae3824e95e623eb9\" returns successfully" Jan 21 23:39:32.691472 kubelet[3545]: E0121 23:39:32.691028 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmwkp" podUID="03532856-4a1c-4971-af49-0f675b6cbf1f" Jan 21 23:39:33.324284 systemd[1]: cri-containerd-940d11380e2f6de20cca3d9080daec5e0085fb1696dd0192ae3824e95e623eb9.scope: Deactivated successfully. 
Jan 21 23:39:33.324909 systemd[1]: cri-containerd-940d11380e2f6de20cca3d9080daec5e0085fb1696dd0192ae3824e95e623eb9.scope: Consumed 325ms CPU time, 189M memory peak, 165.9M written to disk. Jan 21 23:39:33.325878 containerd[2113]: time="2026-01-21T23:39:33.325666641Z" level=info msg="received container exit event container_id:\"940d11380e2f6de20cca3d9080daec5e0085fb1696dd0192ae3824e95e623eb9\" id:\"940d11380e2f6de20cca3d9080daec5e0085fb1696dd0192ae3824e95e623eb9\" pid:4380 exited_at:{seconds:1769038773 nanos:325221488}" Jan 21 23:39:33.328000 audit: BPF prog-id=195 op=UNLOAD Jan 21 23:39:33.344521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-940d11380e2f6de20cca3d9080daec5e0085fb1696dd0192ae3824e95e623eb9-rootfs.mount: Deactivated successfully. Jan 21 23:39:33.388412 kubelet[3545]: I0121 23:39:33.388172 3545 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 21 23:39:34.252944 systemd[1]: Created slice kubepods-besteffort-podb1e000bd_2ebe_4f78_af48_a2456035e42f.slice - libcontainer container kubepods-besteffort-podb1e000bd_2ebe_4f78_af48_a2456035e42f.slice. Jan 21 23:39:34.260537 kubelet[3545]: I0121 23:39:34.260231 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40afe29b-91ee-4738-9c1a-8ee8dd5c9d9f-config\") pod \"goldmane-666569f655-wsfxd\" (UID: \"40afe29b-91ee-4738-9c1a-8ee8dd5c9d9f\") " pod="calico-system/goldmane-666569f655-wsfxd" Jan 21 23:39:34.260537 kubelet[3545]: I0121 23:39:34.260269 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxhrs\" (UniqueName: \"kubernetes.io/projected/b1e000bd-2ebe-4f78-af48-a2456035e42f-kube-api-access-dxhrs\") pod \"calico-kube-controllers-84459bb977-fzglc\" (UID: \"b1e000bd-2ebe-4f78-af48-a2456035e42f\") " pod="calico-system/calico-kube-controllers-84459bb977-fzglc" Jan 21 23:39:34.260537 kubelet[3545]: I0121 23:39:34.260398 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkjgx\" (UniqueName: \"kubernetes.io/projected/40afe29b-91ee-4738-9c1a-8ee8dd5c9d9f-kube-api-access-dkjgx\") pod \"goldmane-666569f655-wsfxd\" (UID: \"40afe29b-91ee-4738-9c1a-8ee8dd5c9d9f\") " pod="calico-system/goldmane-666569f655-wsfxd" Jan 21 23:39:34.260537 kubelet[3545]: I0121 23:39:34.260426 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1e000bd-2ebe-4f78-af48-a2456035e42f-tigera-ca-bundle\") pod \"calico-kube-controllers-84459bb977-fzglc\" (UID: \"b1e000bd-2ebe-4f78-af48-a2456035e42f\") " pod="calico-system/calico-kube-controllers-84459bb977-fzglc" Jan 21 23:39:34.260908 kubelet[3545]: I0121 23:39:34.260563 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40afe29b-91ee-4738-9c1a-8ee8dd5c9d9f-goldmane-ca-bundle\") pod \"goldmane-666569f655-wsfxd\" (UID: \"40afe29b-91ee-4738-9c1a-8ee8dd5c9d9f\") " pod="calico-system/goldmane-666569f655-wsfxd" Jan 21 23:39:34.260908 kubelet[3545]: I0121 23:39:34.260579 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/40afe29b-91ee-4738-9c1a-8ee8dd5c9d9f-goldmane-key-pair\") pod \"goldmane-666569f655-wsfxd\" (UID: 
\"40afe29b-91ee-4738-9c1a-8ee8dd5c9d9f\") " pod="calico-system/goldmane-666569f655-wsfxd" Jan 21 23:39:34.263451 systemd[1]: Created slice kubepods-besteffort-pod40afe29b_91ee_4738_9c1a_8ee8dd5c9d9f.slice - libcontainer container kubepods-besteffort-pod40afe29b_91ee_4738_9c1a_8ee8dd5c9d9f.slice. Jan 21 23:39:34.271035 systemd[1]: Created slice kubepods-besteffort-podfe4222fd_b3e4_4022_8b44_793668b7e61d.slice - libcontainer container kubepods-besteffort-podfe4222fd_b3e4_4022_8b44_793668b7e61d.slice. Jan 21 23:39:34.277626 systemd[1]: Created slice kubepods-besteffort-pod03532856_4a1c_4971_af49_0f675b6cbf1f.slice - libcontainer container kubepods-besteffort-pod03532856_4a1c_4971_af49_0f675b6cbf1f.slice. Jan 21 23:39:34.280962 containerd[2113]: time="2026-01-21T23:39:34.280722718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vmwkp,Uid:03532856-4a1c-4971-af49-0f675b6cbf1f,Namespace:calico-system,Attempt:0,}" Jan 21 23:39:34.285052 systemd[1]: Created slice kubepods-besteffort-pod67e69fef_e284_4866_bf16_ca5d0645fcac.slice - libcontainer container kubepods-besteffort-pod67e69fef_e284_4866_bf16_ca5d0645fcac.slice. Jan 21 23:39:34.290873 systemd[1]: Created slice kubepods-burstable-pod7707a694_44de_42a3_8dd7_f5d1565db734.slice - libcontainer container kubepods-burstable-pod7707a694_44de_42a3_8dd7_f5d1565db734.slice. Jan 21 23:39:34.303229 systemd[1]: Created slice kubepods-burstable-pod6e0b4320_3f37_40f4_a5ce_f14aca373850.slice - libcontainer container kubepods-burstable-pod6e0b4320_3f37_40f4_a5ce_f14aca373850.slice. Jan 21 23:39:34.312631 systemd[1]: Created slice kubepods-besteffort-podb0a63e37_e85d_4a24_a001_b2e5766cb244.slice - libcontainer container kubepods-besteffort-podb0a63e37_e85d_4a24_a001_b2e5766cb244.slice. Jan 21 23:39:34.349703 containerd[2113]: time="2026-01-21T23:39:34.349657440Z" level=error msg="Failed to destroy network for sandbox \"9aa05a9e6d0811f039908cde6eb8dcf2df4c4d70e1791dd943e58fbc36c39085\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:39:34.351732 systemd[1]: run-netns-cni\x2dbf13fde5\x2d8b5e\x2d4292\x2d92c0\x2dd76ed55043ea.mount: Deactivated successfully. 
Jan 21 23:39:34.356247 containerd[2113]: time="2026-01-21T23:39:34.356140223Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vmwkp,Uid:03532856-4a1c-4971-af49-0f675b6cbf1f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9aa05a9e6d0811f039908cde6eb8dcf2df4c4d70e1791dd943e58fbc36c39085\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:39:34.356698 kubelet[3545]: E0121 23:39:34.356345 3545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9aa05a9e6d0811f039908cde6eb8dcf2df4c4d70e1791dd943e58fbc36c39085\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:39:34.356698 kubelet[3545]: E0121 23:39:34.356398 3545 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9aa05a9e6d0811f039908cde6eb8dcf2df4c4d70e1791dd943e58fbc36c39085\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vmwkp" Jan 21 23:39:34.356698 kubelet[3545]: E0121 23:39:34.356415 3545 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9aa05a9e6d0811f039908cde6eb8dcf2df4c4d70e1791dd943e58fbc36c39085\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vmwkp" Jan 21 23:39:34.356786 kubelet[3545]: E0121 23:39:34.356458 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vmwkp_calico-system(03532856-4a1c-4971-af49-0f675b6cbf1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vmwkp_calico-system(03532856-4a1c-4971-af49-0f675b6cbf1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9aa05a9e6d0811f039908cde6eb8dcf2df4c4d70e1791dd943e58fbc36c39085\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vmwkp" podUID="03532856-4a1c-4971-af49-0f675b6cbf1f" Jan 21 23:39:34.360847 kubelet[3545]: I0121 23:39:34.360819 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7707a694-44de-42a3-8dd7-f5d1565db734-config-volume\") pod \"coredns-674b8bbfcf-btpgn\" (UID: \"7707a694-44de-42a3-8dd7-f5d1565db734\") " pod="kube-system/coredns-674b8bbfcf-btpgn" Jan 21 23:39:34.360847 kubelet[3545]: I0121 23:39:34.360855 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/67e69fef-e284-4866-bf16-ca5d0645fcac-calico-apiserver-certs\") pod \"calico-apiserver-86f4fc7866-w9csf\" (UID: 
\"67e69fef-e284-4866-bf16-ca5d0645fcac\") " pod="calico-apiserver/calico-apiserver-86f4fc7866-w9csf" Jan 21 23:39:34.360972 kubelet[3545]: I0121 23:39:34.360872 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fe4222fd-b3e4-4022-8b44-793668b7e61d-calico-apiserver-certs\") pod \"calico-apiserver-86f4fc7866-d746f\" (UID: \"fe4222fd-b3e4-4022-8b44-793668b7e61d\") " pod="calico-apiserver/calico-apiserver-86f4fc7866-d746f" Jan 21 23:39:34.360972 kubelet[3545]: I0121 23:39:34.360889 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e0b4320-3f37-40f4-a5ce-f14aca373850-config-volume\") pod \"coredns-674b8bbfcf-5h2h6\" (UID: \"6e0b4320-3f37-40f4-a5ce-f14aca373850\") " pod="kube-system/coredns-674b8bbfcf-5h2h6" Jan 21 23:39:34.360972 kubelet[3545]: I0121 23:39:34.360899 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5crfq\" (UniqueName: \"kubernetes.io/projected/6e0b4320-3f37-40f4-a5ce-f14aca373850-kube-api-access-5crfq\") pod \"coredns-674b8bbfcf-5h2h6\" (UID: \"6e0b4320-3f37-40f4-a5ce-f14aca373850\") " pod="kube-system/coredns-674b8bbfcf-5h2h6" Jan 21 23:39:34.360972 kubelet[3545]: I0121 23:39:34.360909 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65k8x\" (UniqueName: \"kubernetes.io/projected/7707a694-44de-42a3-8dd7-f5d1565db734-kube-api-access-65k8x\") pod \"coredns-674b8bbfcf-btpgn\" (UID: \"7707a694-44de-42a3-8dd7-f5d1565db734\") " pod="kube-system/coredns-674b8bbfcf-btpgn" Jan 21 23:39:34.360972 kubelet[3545]: I0121 23:39:34.360942 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b0a63e37-e85d-4a24-a001-b2e5766cb244-whisker-backend-key-pair\") pod \"whisker-6f7958d5db-b5bqn\" (UID: \"b0a63e37-e85d-4a24-a001-b2e5766cb244\") " pod="calico-system/whisker-6f7958d5db-b5bqn" Jan 21 23:39:34.361078 kubelet[3545]: I0121 23:39:34.360952 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0a63e37-e85d-4a24-a001-b2e5766cb244-whisker-ca-bundle\") pod \"whisker-6f7958d5db-b5bqn\" (UID: \"b0a63e37-e85d-4a24-a001-b2e5766cb244\") " pod="calico-system/whisker-6f7958d5db-b5bqn" Jan 21 23:39:34.361078 kubelet[3545]: I0121 23:39:34.360961 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr76v\" (UniqueName: \"kubernetes.io/projected/b0a63e37-e85d-4a24-a001-b2e5766cb244-kube-api-access-lr76v\") pod \"whisker-6f7958d5db-b5bqn\" (UID: \"b0a63e37-e85d-4a24-a001-b2e5766cb244\") " pod="calico-system/whisker-6f7958d5db-b5bqn" Jan 21 23:39:34.361078 kubelet[3545]: I0121 23:39:34.360979 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pszq2\" (UniqueName: \"kubernetes.io/projected/fe4222fd-b3e4-4022-8b44-793668b7e61d-kube-api-access-pszq2\") pod \"calico-apiserver-86f4fc7866-d746f\" (UID: \"fe4222fd-b3e4-4022-8b44-793668b7e61d\") " pod="calico-apiserver/calico-apiserver-86f4fc7866-d746f" Jan 21 23:39:34.361078 kubelet[3545]: I0121 23:39:34.360997 3545 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4r6h\" (UniqueName: \"kubernetes.io/projected/67e69fef-e284-4866-bf16-ca5d0645fcac-kube-api-access-w4r6h\") pod \"calico-apiserver-86f4fc7866-w9csf\" (UID: \"67e69fef-e284-4866-bf16-ca5d0645fcac\") " pod="calico-apiserver/calico-apiserver-86f4fc7866-w9csf" Jan 21 23:39:34.559797 containerd[2113]: time="2026-01-21T23:39:34.559684788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84459bb977-fzglc,Uid:b1e000bd-2ebe-4f78-af48-a2456035e42f,Namespace:calico-system,Attempt:0,}" Jan 21 23:39:34.575445 containerd[2113]: time="2026-01-21T23:39:34.575246305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wsfxd,Uid:40afe29b-91ee-4738-9c1a-8ee8dd5c9d9f,Namespace:calico-system,Attempt:0,}" Jan 21 23:39:34.578385 containerd[2113]: time="2026-01-21T23:39:34.578345795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f4fc7866-d746f,Uid:fe4222fd-b3e4-4022-8b44-793668b7e61d,Namespace:calico-apiserver,Attempt:0,}" Jan 21 23:39:34.598742 containerd[2113]: time="2026-01-21T23:39:34.598680555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f4fc7866-w9csf,Uid:67e69fef-e284-4866-bf16-ca5d0645fcac,Namespace:calico-apiserver,Attempt:0,}" Jan 21 23:39:34.599777 containerd[2113]: time="2026-01-21T23:39:34.599730565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-btpgn,Uid:7707a694-44de-42a3-8dd7-f5d1565db734,Namespace:kube-system,Attempt:0,}" Jan 21 23:39:34.609027 containerd[2113]: time="2026-01-21T23:39:34.608956824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5h2h6,Uid:6e0b4320-3f37-40f4-a5ce-f14aca373850,Namespace:kube-system,Attempt:0,}" Jan 21 23:39:34.619661 containerd[2113]: time="2026-01-21T23:39:34.619592363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f7958d5db-b5bqn,Uid:b0a63e37-e85d-4a24-a001-b2e5766cb244,Namespace:calico-system,Attempt:0,}" Jan 21 23:39:34.630459 containerd[2113]: time="2026-01-21T23:39:34.630385755Z" level=error msg="Failed to destroy network for sandbox \"19340f67c9048ae40340974bbc7ae84bb22e21fd29826c89248414ddad9b683c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:39:34.662161 containerd[2113]: time="2026-01-21T23:39:34.661926949Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84459bb977-fzglc,Uid:b1e000bd-2ebe-4f78-af48-a2456035e42f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"19340f67c9048ae40340974bbc7ae84bb22e21fd29826c89248414ddad9b683c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:39:34.662317 kubelet[3545]: E0121 23:39:34.662193 3545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19340f67c9048ae40340974bbc7ae84bb22e21fd29826c89248414ddad9b683c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:39:34.662317 kubelet[3545]: E0121 
23:39:34.662255 3545 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19340f67c9048ae40340974bbc7ae84bb22e21fd29826c89248414ddad9b683c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84459bb977-fzglc" Jan 21 23:39:34.662317 kubelet[3545]: E0121 23:39:34.662275 3545 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19340f67c9048ae40340974bbc7ae84bb22e21fd29826c89248414ddad9b683c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84459bb977-fzglc" Jan 21 23:39:34.663797 kubelet[3545]: E0121 23:39:34.662969 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84459bb977-fzglc_calico-system(b1e000bd-2ebe-4f78-af48-a2456035e42f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84459bb977-fzglc_calico-system(b1e000bd-2ebe-4f78-af48-a2456035e42f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19340f67c9048ae40340974bbc7ae84bb22e21fd29826c89248414ddad9b683c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84459bb977-fzglc" podUID="b1e000bd-2ebe-4f78-af48-a2456035e42f" Jan 21 23:39:34.682859 containerd[2113]: time="2026-01-21T23:39:34.682799187Z" level=error msg="Failed to destroy network for sandbox \"6b4ac6d52e249da38f86b832ad0b3a905f7fbe0fd08268f0f057d846aac2903d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:39:34.684451 containerd[2113]: time="2026-01-21T23:39:34.684409842Z" level=error msg="Failed to destroy network for sandbox \"ec9ad37440b929fd043b981ca6d5b7f4f385de8a9eaceda88abc80345aa317e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:39:34.692672 containerd[2113]: time="2026-01-21T23:39:34.692585660Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f4fc7866-d746f,Uid:fe4222fd-b3e4-4022-8b44-793668b7e61d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b4ac6d52e249da38f86b832ad0b3a905f7fbe0fd08268f0f057d846aac2903d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:39:34.692800 kubelet[3545]: E0121 23:39:34.692733 3545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b4ac6d52e249da38f86b832ad0b3a905f7fbe0fd08268f0f057d846aac2903d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:39:34.692800 kubelet[3545]: E0121 23:39:34.692784 3545 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b4ac6d52e249da38f86b832ad0b3a905f7fbe0fd08268f0f057d846aac2903d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86f4fc7866-d746f" Jan 21 23:39:34.692862 kubelet[3545]: E0121 23:39:34.692801 3545 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b4ac6d52e249da38f86b832ad0b3a905f7fbe0fd08268f0f057d846aac2903d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86f4fc7866-d746f" Jan 21 23:39:34.692862 kubelet[3545]: E0121 23:39:34.692842 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86f4fc7866-d746f_calico-apiserver(fe4222fd-b3e4-4022-8b44-793668b7e61d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86f4fc7866-d746f_calico-apiserver(fe4222fd-b3e4-4022-8b44-793668b7e61d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b4ac6d52e249da38f86b832ad0b3a905f7fbe0fd08268f0f057d846aac2903d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86f4fc7866-d746f" podUID="fe4222fd-b3e4-4022-8b44-793668b7e61d" Jan 21 23:39:34.699803 containerd[2113]: time="2026-01-21T23:39:34.699251394Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wsfxd,Uid:40afe29b-91ee-4738-9c1a-8ee8dd5c9d9f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec9ad37440b929fd043b981ca6d5b7f4f385de8a9eaceda88abc80345aa317e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:39:34.700452 kubelet[3545]: E0121 23:39:34.700177 3545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec9ad37440b929fd043b981ca6d5b7f4f385de8a9eaceda88abc80345aa317e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:39:34.700452 kubelet[3545]: E0121 23:39:34.700226 3545 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec9ad37440b929fd043b981ca6d5b7f4f385de8a9eaceda88abc80345aa317e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-wsfxd" Jan 21 23:39:34.700452 kubelet[3545]: E0121 23:39:34.700239 3545 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec9ad37440b929fd043b981ca6d5b7f4f385de8a9eaceda88abc80345aa317e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-wsfxd" Jan 21 23:39:34.700572 kubelet[3545]: E0121 23:39:34.700277 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-wsfxd_calico-system(40afe29b-91ee-4738-9c1a-8ee8dd5c9d9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-wsfxd_calico-system(40afe29b-91ee-4738-9c1a-8ee8dd5c9d9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec9ad37440b929fd043b981ca6d5b7f4f385de8a9eaceda88abc80345aa317e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-wsfxd" podUID="40afe29b-91ee-4738-9c1a-8ee8dd5c9d9f" Jan 21 23:39:34.735687 containerd[2113]: time="2026-01-21T23:39:34.735636115Z" level=error msg="Failed to destroy network for sandbox \"0a2c4e9cf06f4044980296737e5d1fa699233b42795b6a5778888d2e9b31b2ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:39:34.736967 containerd[2113]: time="2026-01-21T23:39:34.736927726Z" level=error msg="Failed to destroy network for sandbox \"42fc1c73c2af63f461a8015b38bf3912524442ee03227c5f25c480c1a99a2429\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:39:34.744491 containerd[2113]: time="2026-01-21T23:39:34.744440069Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-btpgn,Uid:7707a694-44de-42a3-8dd7-f5d1565db734,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a2c4e9cf06f4044980296737e5d1fa699233b42795b6a5778888d2e9b31b2ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:39:34.744819 kubelet[3545]: E0121 23:39:34.744778 3545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a2c4e9cf06f4044980296737e5d1fa699233b42795b6a5778888d2e9b31b2ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:39:34.744879 kubelet[3545]: E0121 23:39:34.744867 3545 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a2c4e9cf06f4044980296737e5d1fa699233b42795b6a5778888d2e9b31b2ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-btpgn" Jan 21 23:39:34.744900 kubelet[3545]: E0121 
23:39:34.744887 3545 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a2c4e9cf06f4044980296737e5d1fa699233b42795b6a5778888d2e9b31b2ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-btpgn" Jan 21 23:39:34.745168 kubelet[3545]: E0121 23:39:34.744944 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-btpgn_kube-system(7707a694-44de-42a3-8dd7-f5d1565db734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-btpgn_kube-system(7707a694-44de-42a3-8dd7-f5d1565db734)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a2c4e9cf06f4044980296737e5d1fa699233b42795b6a5778888d2e9b31b2ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-btpgn" podUID="7707a694-44de-42a3-8dd7-f5d1565db734" Jan 21 23:39:34.747834 containerd[2113]: time="2026-01-21T23:39:34.747212978Z" level=error msg="Failed to destroy network for sandbox \"23218f9d69aeb14e53ccc8416d2dd543b23729846c5692ce4455ec6081c25b50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:39:34.751687 containerd[2113]: time="2026-01-21T23:39:34.751654161Z" level=error msg="Failed to destroy network for sandbox \"e7bc162154d2b085da58c8978470c5b245a66f18d7723293b7f751f421958fc2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:39:34.752688 containerd[2113]: time="2026-01-21T23:39:34.752660633Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f4fc7866-w9csf,Uid:67e69fef-e284-4866-bf16-ca5d0645fcac,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"42fc1c73c2af63f461a8015b38bf3912524442ee03227c5f25c480c1a99a2429\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:39:34.753037 kubelet[3545]: E0121 23:39:34.752980 3545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42fc1c73c2af63f461a8015b38bf3912524442ee03227c5f25c480c1a99a2429\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:39:34.753395 kubelet[3545]: E0121 23:39:34.753367 3545 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42fc1c73c2af63f461a8015b38bf3912524442ee03227c5f25c480c1a99a2429\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-86f4fc7866-w9csf" Jan 21 23:39:34.753419 kubelet[3545]: E0121 23:39:34.753401 3545 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42fc1c73c2af63f461a8015b38bf3912524442ee03227c5f25c480c1a99a2429\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86f4fc7866-w9csf" Jan 21 23:39:34.753481 kubelet[3545]: E0121 23:39:34.753455 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86f4fc7866-w9csf_calico-apiserver(67e69fef-e284-4866-bf16-ca5d0645fcac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86f4fc7866-w9csf_calico-apiserver(67e69fef-e284-4866-bf16-ca5d0645fcac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42fc1c73c2af63f461a8015b38bf3912524442ee03227c5f25c480c1a99a2429\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86f4fc7866-w9csf" podUID="67e69fef-e284-4866-bf16-ca5d0645fcac" Jan 21 23:39:34.760150 containerd[2113]: time="2026-01-21T23:39:34.760107286Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f7958d5db-b5bqn,Uid:b0a63e37-e85d-4a24-a001-b2e5766cb244,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"23218f9d69aeb14e53ccc8416d2dd543b23729846c5692ce4455ec6081c25b50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:39:34.760405 kubelet[3545]: E0121 23:39:34.760364 3545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23218f9d69aeb14e53ccc8416d2dd543b23729846c5692ce4455ec6081c25b50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:39:34.760439 kubelet[3545]: E0121 23:39:34.760416 3545 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23218f9d69aeb14e53ccc8416d2dd543b23729846c5692ce4455ec6081c25b50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f7958d5db-b5bqn" Jan 21 23:39:34.760439 kubelet[3545]: E0121 23:39:34.760431 3545 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23218f9d69aeb14e53ccc8416d2dd543b23729846c5692ce4455ec6081c25b50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f7958d5db-b5bqn" Jan 21 23:39:34.760489 kubelet[3545]: E0121 23:39:34.760465 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"whisker-6f7958d5db-b5bqn_calico-system(b0a63e37-e85d-4a24-a001-b2e5766cb244)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6f7958d5db-b5bqn_calico-system(b0a63e37-e85d-4a24-a001-b2e5766cb244)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23218f9d69aeb14e53ccc8416d2dd543b23729846c5692ce4455ec6081c25b50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f7958d5db-b5bqn" podUID="b0a63e37-e85d-4a24-a001-b2e5766cb244" Jan 21 23:39:34.762604 containerd[2113]: time="2026-01-21T23:39:34.762548846Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5h2h6,Uid:6e0b4320-3f37-40f4-a5ce-f14aca373850,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7bc162154d2b085da58c8978470c5b245a66f18d7723293b7f751f421958fc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:39:34.762754 kubelet[3545]: E0121 23:39:34.762722 3545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7bc162154d2b085da58c8978470c5b245a66f18d7723293b7f751f421958fc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:39:34.762786 kubelet[3545]: E0121 23:39:34.762767 3545 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7bc162154d2b085da58c8978470c5b245a66f18d7723293b7f751f421958fc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5h2h6" Jan 21 23:39:34.762786 kubelet[3545]: E0121 23:39:34.762782 3545 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7bc162154d2b085da58c8978470c5b245a66f18d7723293b7f751f421958fc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5h2h6" Jan 21 23:39:34.762847 kubelet[3545]: E0121 23:39:34.762811 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-5h2h6_kube-system(6e0b4320-3f37-40f4-a5ce-f14aca373850)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-5h2h6_kube-system(6e0b4320-3f37-40f4-a5ce-f14aca373850)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7bc162154d2b085da58c8978470c5b245a66f18d7723293b7f751f421958fc2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-5h2h6" podUID="6e0b4320-3f37-40f4-a5ce-f14aca373850" Jan 21 23:39:34.880845 containerd[2113]: time="2026-01-21T23:39:34.880707690Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 21 23:39:38.506328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2691597932.mount: Deactivated successfully. Jan 21 23:39:38.634711 containerd[2113]: time="2026-01-21T23:39:38.634641732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:39:38.637063 containerd[2113]: time="2026-01-21T23:39:38.636880922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Jan 21 23:39:38.639567 containerd[2113]: time="2026-01-21T23:39:38.639521861Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:39:38.643083 containerd[2113]: time="2026-01-21T23:39:38.642956556Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:39:38.643455 containerd[2113]: time="2026-01-21T23:39:38.643326777Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 3.762571141s" Jan 21 23:39:38.643455 containerd[2113]: time="2026-01-21T23:39:38.643359290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 21 23:39:38.667243 containerd[2113]: time="2026-01-21T23:39:38.667197252Z" level=info msg="CreateContainer within sandbox \"4f7333066431df8a8604c3627ef28985da3e0af7ca8276e28ce3164fefd361a3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 21 23:39:38.692757 containerd[2113]: time="2026-01-21T23:39:38.692454792Z" level=info msg="Container b73b6088c5cd7dd0efd4e0be3da8ce4f530370d58d8dfbd85ff76c3239c36d24: CDI devices from CRI Config.CDIDevices: []" Jan 21 23:39:38.695340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2516719332.mount: Deactivated successfully. Jan 21 23:39:38.713262 containerd[2113]: time="2026-01-21T23:39:38.713207439Z" level=info msg="CreateContainer within sandbox \"4f7333066431df8a8604c3627ef28985da3e0af7ca8276e28ce3164fefd361a3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b73b6088c5cd7dd0efd4e0be3da8ce4f530370d58d8dfbd85ff76c3239c36d24\"" Jan 21 23:39:38.714484 containerd[2113]: time="2026-01-21T23:39:38.714448098Z" level=info msg="StartContainer for \"b73b6088c5cd7dd0efd4e0be3da8ce4f530370d58d8dfbd85ff76c3239c36d24\"" Jan 21 23:39:38.716058 containerd[2113]: time="2026-01-21T23:39:38.716021664Z" level=info msg="connecting to shim b73b6088c5cd7dd0efd4e0be3da8ce4f530370d58d8dfbd85ff76c3239c36d24" address="unix:///run/containerd/s/28ebc5892c3e712a3145516d059fd1e9e743ad98781a45fe6b70f80feb33d4af" protocol=ttrpc version=3 Jan 21 23:39:38.733256 systemd[1]: Started cri-containerd-b73b6088c5cd7dd0efd4e0be3da8ce4f530370d58d8dfbd85ff76c3239c36d24.scope - libcontainer container b73b6088c5cd7dd0efd4e0be3da8ce4f530370d58d8dfbd85ff76c3239c36d24. 
Jan 21 23:39:38.775000 audit: BPF prog-id=196 op=LOAD Jan 21 23:39:38.778215 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 21 23:39:38.778313 kernel: audit: type=1334 audit(1769038778.775:585): prog-id=196 op=LOAD Jan 21 23:39:38.775000 audit[4635]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4116 pid=4635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:38.798807 kernel: audit: type=1300 audit(1769038778.775:585): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4116 pid=4635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:38.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237336236303838633563643764643065666434653062653364613863 Jan 21 23:39:38.816096 kernel: audit: type=1327 audit(1769038778.775:585): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237336236303838633563643764643065666434653062653364613863 Jan 21 23:39:38.777000 audit: BPF prog-id=197 op=LOAD Jan 21 23:39:38.821388 kernel: audit: type=1334 audit(1769038778.777:586): prog-id=197 op=LOAD Jan 21 23:39:38.777000 audit[4635]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4116 pid=4635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:38.838117 kernel: audit: type=1300 audit(1769038778.777:586): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4116 pid=4635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:38.777000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237336236303838633563643764643065666434653062653364613863 Jan 21 23:39:38.855158 kernel: audit: type=1327 audit(1769038778.777:586): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237336236303838633563643764643065666434653062653364613863 Jan 21 23:39:38.778000 audit: BPF prog-id=197 op=UNLOAD Jan 21 23:39:38.860295 kernel: audit: type=1334 audit(1769038778.778:587): prog-id=197 op=UNLOAD Jan 21 23:39:38.778000 audit[4635]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4116 pid=4635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:38.778000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237336236303838633563643764643065666434653062653364613863 Jan 21 23:39:38.894359 kernel: audit: type=1300 audit(1769038778.778:587): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4116 pid=4635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:38.894486 kernel: audit: type=1327 audit(1769038778.778:587): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237336236303838633563643764643065666434653062653364613863 Jan 21 23:39:38.778000 audit: BPF prog-id=196 op=UNLOAD Jan 21 23:39:38.899996 kernel: audit: type=1334 audit(1769038778.778:588): prog-id=196 op=UNLOAD Jan 21 23:39:38.778000 audit[4635]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4116 pid=4635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:38.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237336236303838633563643764643065666434653062653364613863 Jan 21 23:39:38.778000 audit: BPF prog-id=198 op=LOAD Jan 21 23:39:38.778000 audit[4635]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4116 pid=4635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:38.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237336236303838633563643764643065666434653062653364613863 Jan 21 23:39:38.912208 containerd[2113]: time="2026-01-21T23:39:38.912120955Z" level=info msg="StartContainer for \"b73b6088c5cd7dd0efd4e0be3da8ce4f530370d58d8dfbd85ff76c3239c36d24\" returns successfully" Jan 21 23:39:39.079658 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 21 23:39:39.079799 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
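Every audit SYSCALL record in this stream carries the triggering command line only as a hex-encoded PROCTITLE field, with NUL bytes separating argv entries, which is why the runc and bpftool invocations appear as long hex blobs. A minimal decoding sketch; the sample value is the start of one of the runc proctitles above, truncated for brevity:

    # Sketch: decode an audit PROCTITLE value back into a readable command line.
    # The kernel hex-encodes the process cmdline and separates argv entries with NUL bytes.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        return " ".join(p.decode("utf-8", "replace") for p in raw.split(b"\x00") if p)

    # First three argv entries of a runc proctitle from the records above (truncated).
    sample = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
    print(decode_proctitle(sample))   # runc --root /run/containerd/runc/k8s.io

Applied to the bpftool records further down, the same helper yields commands such as "bpftool map list --json" and "bpftool map create /sys/fs/bpf/tc/globals/cali_ctlb_progs type prog_array key 4 value 4 entries 3 name cali_ctlb_progs flags 0".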
Jan 21 23:39:39.403638 kubelet[3545]: I0121 23:39:39.403203 3545 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b0a63e37-e85d-4a24-a001-b2e5766cb244-whisker-backend-key-pair\") pod \"b0a63e37-e85d-4a24-a001-b2e5766cb244\" (UID: \"b0a63e37-e85d-4a24-a001-b2e5766cb244\") " Jan 21 23:39:39.403638 kubelet[3545]: I0121 23:39:39.403259 3545 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0a63e37-e85d-4a24-a001-b2e5766cb244-whisker-ca-bundle\") pod \"b0a63e37-e85d-4a24-a001-b2e5766cb244\" (UID: \"b0a63e37-e85d-4a24-a001-b2e5766cb244\") " Jan 21 23:39:39.403638 kubelet[3545]: I0121 23:39:39.403280 3545 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr76v\" (UniqueName: \"kubernetes.io/projected/b0a63e37-e85d-4a24-a001-b2e5766cb244-kube-api-access-lr76v\") pod \"b0a63e37-e85d-4a24-a001-b2e5766cb244\" (UID: \"b0a63e37-e85d-4a24-a001-b2e5766cb244\") " Jan 21 23:39:39.405363 kubelet[3545]: I0121 23:39:39.405323 3545 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0a63e37-e85d-4a24-a001-b2e5766cb244-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b0a63e37-e85d-4a24-a001-b2e5766cb244" (UID: "b0a63e37-e85d-4a24-a001-b2e5766cb244"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 23:39:39.406166 kubelet[3545]: I0121 23:39:39.406138 3545 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0a63e37-e85d-4a24-a001-b2e5766cb244-kube-api-access-lr76v" (OuterVolumeSpecName: "kube-api-access-lr76v") pod "b0a63e37-e85d-4a24-a001-b2e5766cb244" (UID: "b0a63e37-e85d-4a24-a001-b2e5766cb244"). InnerVolumeSpecName "kube-api-access-lr76v". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 23:39:39.406281 kubelet[3545]: I0121 23:39:39.406162 3545 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0a63e37-e85d-4a24-a001-b2e5766cb244-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b0a63e37-e85d-4a24-a001-b2e5766cb244" (UID: "b0a63e37-e85d-4a24-a001-b2e5766cb244"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 23:39:39.503707 kubelet[3545]: I0121 23:39:39.503664 3545 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lr76v\" (UniqueName: \"kubernetes.io/projected/b0a63e37-e85d-4a24-a001-b2e5766cb244-kube-api-access-lr76v\") on node \"ci-4515.1.0-n-a0ba06055b\" DevicePath \"\"" Jan 21 23:39:39.503707 kubelet[3545]: I0121 23:39:39.503704 3545 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b0a63e37-e85d-4a24-a001-b2e5766cb244-whisker-backend-key-pair\") on node \"ci-4515.1.0-n-a0ba06055b\" DevicePath \"\"" Jan 21 23:39:39.503707 kubelet[3545]: I0121 23:39:39.503713 3545 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0a63e37-e85d-4a24-a001-b2e5766cb244-whisker-ca-bundle\") on node \"ci-4515.1.0-n-a0ba06055b\" DevicePath \"\"" Jan 21 23:39:39.507121 systemd[1]: var-lib-kubelet-pods-b0a63e37\x2de85d\x2d4a24\x2da001\x2db2e5766cb244-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlr76v.mount: Deactivated successfully. Jan 21 23:39:39.507233 systemd[1]: var-lib-kubelet-pods-b0a63e37\x2de85d\x2d4a24\x2da001\x2db2e5766cb244-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 21 23:39:39.533286 kubelet[3545]: I0121 23:39:39.533249 3545 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 23:39:39.566000 audit[4695]: NETFILTER_CFG table=filter:120 family=2 entries=21 op=nft_register_rule pid=4695 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:39.566000 audit[4695]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcd4c7ce0 a2=0 a3=1 items=0 ppid=3761 pid=4695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:39.566000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:39.570000 audit[4695]: NETFILTER_CFG table=nat:121 family=2 entries=19 op=nft_register_chain pid=4695 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:39.570000 audit[4695]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffcd4c7ce0 a2=0 a3=1 items=0 ppid=3761 pid=4695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:39.570000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:39.921844 systemd[1]: Removed slice kubepods-besteffort-podb0a63e37_e85d_4a24_a001_b2e5766cb244.slice - libcontainer container kubepods-besteffort-podb0a63e37_e85d_4a24_a001_b2e5766cb244.slice. 
Jan 21 23:39:39.953674 kubelet[3545]: I0121 23:39:39.953611 3545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-78df5" podStartSLOduration=2.312391833 podStartE2EDuration="15.953583874s" podCreationTimestamp="2026-01-21 23:39:24 +0000 UTC" firstStartedPulling="2026-01-21 23:39:25.003229958 +0000 UTC m=+22.453089840" lastFinishedPulling="2026-01-21 23:39:38.644421999 +0000 UTC m=+36.094281881" observedRunningTime="2026-01-21 23:39:39.941735238 +0000 UTC m=+37.391595120" watchObservedRunningTime="2026-01-21 23:39:39.953583874 +0000 UTC m=+37.403443756" Jan 21 23:39:40.013777 systemd[1]: Created slice kubepods-besteffort-podfd7f2311_8da9_446b_ab8a_7da03038d65b.slice - libcontainer container kubepods-besteffort-podfd7f2311_8da9_446b_ab8a_7da03038d65b.slice. Jan 21 23:39:40.107726 kubelet[3545]: I0121 23:39:40.107657 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fd7f2311-8da9-446b-ab8a-7da03038d65b-whisker-backend-key-pair\") pod \"whisker-5d9759fcd4-9gjqv\" (UID: \"fd7f2311-8da9-446b-ab8a-7da03038d65b\") " pod="calico-system/whisker-5d9759fcd4-9gjqv" Jan 21 23:39:40.107726 kubelet[3545]: I0121 23:39:40.107702 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjhg6\" (UniqueName: \"kubernetes.io/projected/fd7f2311-8da9-446b-ab8a-7da03038d65b-kube-api-access-xjhg6\") pod \"whisker-5d9759fcd4-9gjqv\" (UID: \"fd7f2311-8da9-446b-ab8a-7da03038d65b\") " pod="calico-system/whisker-5d9759fcd4-9gjqv" Jan 21 23:39:40.107726 kubelet[3545]: I0121 23:39:40.107731 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd7f2311-8da9-446b-ab8a-7da03038d65b-whisker-ca-bundle\") pod \"whisker-5d9759fcd4-9gjqv\" (UID: \"fd7f2311-8da9-446b-ab8a-7da03038d65b\") " pod="calico-system/whisker-5d9759fcd4-9gjqv" Jan 21 23:39:40.318704 containerd[2113]: time="2026-01-21T23:39:40.318425638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d9759fcd4-9gjqv,Uid:fd7f2311-8da9-446b-ab8a-7da03038d65b,Namespace:calico-system,Attempt:0,}" Jan 21 23:39:40.476673 systemd-networkd[1703]: cali42a0fa90703: Link UP Jan 21 23:39:40.479818 systemd-networkd[1703]: cali42a0fa90703: Gained carrier Jan 21 23:39:40.499546 containerd[2113]: 2026-01-21 23:39:40.346 [INFO][4699] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 21 23:39:40.499546 containerd[2113]: 2026-01-21 23:39:40.389 [INFO][4699] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--n--a0ba06055b-k8s-whisker--5d9759fcd4--9gjqv-eth0 whisker-5d9759fcd4- calico-system fd7f2311-8da9-446b-ab8a-7da03038d65b 878 0 2026-01-21 23:39:39 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5d9759fcd4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4515.1.0-n-a0ba06055b whisker-5d9759fcd4-9gjqv eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali42a0fa90703 [] [] }} ContainerID="61dcc2e6619ae7159b586d0243dc3c8987f7958480fcfb5344efc01620b7c7cf" Namespace="calico-system" Pod="whisker-5d9759fcd4-9gjqv" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-whisker--5d9759fcd4--9gjqv-" Jan 21 23:39:40.499546 
containerd[2113]: 2026-01-21 23:39:40.389 [INFO][4699] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="61dcc2e6619ae7159b586d0243dc3c8987f7958480fcfb5344efc01620b7c7cf" Namespace="calico-system" Pod="whisker-5d9759fcd4-9gjqv" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-whisker--5d9759fcd4--9gjqv-eth0" Jan 21 23:39:40.499546 containerd[2113]: 2026-01-21 23:39:40.419 [INFO][4711] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="61dcc2e6619ae7159b586d0243dc3c8987f7958480fcfb5344efc01620b7c7cf" HandleID="k8s-pod-network.61dcc2e6619ae7159b586d0243dc3c8987f7958480fcfb5344efc01620b7c7cf" Workload="ci--4515.1.0--n--a0ba06055b-k8s-whisker--5d9759fcd4--9gjqv-eth0" Jan 21 23:39:40.499971 containerd[2113]: 2026-01-21 23:39:40.421 [INFO][4711] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="61dcc2e6619ae7159b586d0243dc3c8987f7958480fcfb5344efc01620b7c7cf" HandleID="k8s-pod-network.61dcc2e6619ae7159b586d0243dc3c8987f7958480fcfb5344efc01620b7c7cf" Workload="ci--4515.1.0--n--a0ba06055b-k8s-whisker--5d9759fcd4--9gjqv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000255bd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-n-a0ba06055b", "pod":"whisker-5d9759fcd4-9gjqv", "timestamp":"2026-01-21 23:39:40.419638979 +0000 UTC"}, Hostname:"ci-4515.1.0-n-a0ba06055b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 23:39:40.499971 containerd[2113]: 2026-01-21 23:39:40.421 [INFO][4711] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 23:39:40.499971 containerd[2113]: 2026-01-21 23:39:40.421 [INFO][4711] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
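The kubelet pod_startup_latency_tracker record above (for calico-node-78df5) prints durations that are internally consistent: podStartE2EDuration minus the image-pull window (lastFinishedPulling minus firstStartedPulling) equals podStartSLOduration, i.e. the SLO figure excludes time spent pulling images. A quick arithmetic check using the monotonic m=+ offsets from the record:

    # Arithmetic check of the pod_startup_latency_tracker record above,
    # using the monotonic offsets (m=+...) it prints.
    e2e         = 15.953583874                    # podStartE2EDuration
    pull_window = 36.094281881 - 22.453089840     # lastFinishedPulling - firstStartedPulling
    slo         = e2e - pull_window               # startup time excluding image pulls

    print(f"pull window  {pull_window:.9f}s")     # 13.641192041s
    print(f"SLO duration {slo:.9f}s")             # 2.312391833s, matching podStartSLOduration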
Jan 21 23:39:40.499971 containerd[2113]: 2026-01-21 23:39:40.421 [INFO][4711] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-a0ba06055b' Jan 21 23:39:40.499971 containerd[2113]: 2026-01-21 23:39:40.428 [INFO][4711] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.61dcc2e6619ae7159b586d0243dc3c8987f7958480fcfb5344efc01620b7c7cf" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:40.499971 containerd[2113]: 2026-01-21 23:39:40.431 [INFO][4711] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:40.499971 containerd[2113]: 2026-01-21 23:39:40.435 [INFO][4711] ipam/ipam.go 511: Trying affinity for 192.168.73.64/26 host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:40.499971 containerd[2113]: 2026-01-21 23:39:40.436 [INFO][4711] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.64/26 host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:40.499971 containerd[2113]: 2026-01-21 23:39:40.438 [INFO][4711] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.64/26 host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:40.500136 containerd[2113]: 2026-01-21 23:39:40.438 [INFO][4711] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.73.64/26 handle="k8s-pod-network.61dcc2e6619ae7159b586d0243dc3c8987f7958480fcfb5344efc01620b7c7cf" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:40.500136 containerd[2113]: 2026-01-21 23:39:40.440 [INFO][4711] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.61dcc2e6619ae7159b586d0243dc3c8987f7958480fcfb5344efc01620b7c7cf Jan 21 23:39:40.500136 containerd[2113]: 2026-01-21 23:39:40.444 [INFO][4711] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.64/26 handle="k8s-pod-network.61dcc2e6619ae7159b586d0243dc3c8987f7958480fcfb5344efc01620b7c7cf" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:40.500136 containerd[2113]: 2026-01-21 23:39:40.451 [INFO][4711] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.73.65/26] block=192.168.73.64/26 handle="k8s-pod-network.61dcc2e6619ae7159b586d0243dc3c8987f7958480fcfb5344efc01620b7c7cf" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:40.500136 containerd[2113]: 2026-01-21 23:39:40.452 [INFO][4711] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.65/26] handle="k8s-pod-network.61dcc2e6619ae7159b586d0243dc3c8987f7958480fcfb5344efc01620b7c7cf" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:40.500136 containerd[2113]: 2026-01-21 23:39:40.452 [INFO][4711] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
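The IPAM walk above stays entirely inside the host's affine block: the plugin confirms the affinity for 192.168.73.64/26 on ci-4515.1.0-n-a0ba06055b and claims 192.168.73.65 from it. A minimal sketch with the standard ipaddress module, just to make the block arithmetic explicit:

    # Sketch: the address block the IPAM trace above operates on.
    import ipaddress

    block   = ipaddress.ip_network("192.168.73.64/26")
    claimed = ipaddress.ip_address("192.168.73.65")

    print(block.num_addresses)          # 64 addresses (Calico's default /26 block size)
    print(block[0], "-", block[-1])     # 192.168.73.64 - 192.168.73.127
    print(claimed in block)             # True: the claimed IP falls inside the affine block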
Jan 21 23:39:40.500136 containerd[2113]: 2026-01-21 23:39:40.452 [INFO][4711] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.65/26] IPv6=[] ContainerID="61dcc2e6619ae7159b586d0243dc3c8987f7958480fcfb5344efc01620b7c7cf" HandleID="k8s-pod-network.61dcc2e6619ae7159b586d0243dc3c8987f7958480fcfb5344efc01620b7c7cf" Workload="ci--4515.1.0--n--a0ba06055b-k8s-whisker--5d9759fcd4--9gjqv-eth0" Jan 21 23:39:40.500235 containerd[2113]: 2026-01-21 23:39:40.456 [INFO][4699] cni-plugin/k8s.go 418: Populated endpoint ContainerID="61dcc2e6619ae7159b586d0243dc3c8987f7958480fcfb5344efc01620b7c7cf" Namespace="calico-system" Pod="whisker-5d9759fcd4-9gjqv" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-whisker--5d9759fcd4--9gjqv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--a0ba06055b-k8s-whisker--5d9759fcd4--9gjqv-eth0", GenerateName:"whisker-5d9759fcd4-", Namespace:"calico-system", SelfLink:"", UID:"fd7f2311-8da9-446b-ab8a-7da03038d65b", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 39, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d9759fcd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-a0ba06055b", ContainerID:"", Pod:"whisker-5d9759fcd4-9gjqv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.73.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali42a0fa90703", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:39:40.500235 containerd[2113]: 2026-01-21 23:39:40.456 [INFO][4699] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.65/32] ContainerID="61dcc2e6619ae7159b586d0243dc3c8987f7958480fcfb5344efc01620b7c7cf" Namespace="calico-system" Pod="whisker-5d9759fcd4-9gjqv" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-whisker--5d9759fcd4--9gjqv-eth0" Jan 21 23:39:40.500287 containerd[2113]: 2026-01-21 23:39:40.456 [INFO][4699] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali42a0fa90703 ContainerID="61dcc2e6619ae7159b586d0243dc3c8987f7958480fcfb5344efc01620b7c7cf" Namespace="calico-system" Pod="whisker-5d9759fcd4-9gjqv" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-whisker--5d9759fcd4--9gjqv-eth0" Jan 21 23:39:40.500287 containerd[2113]: 2026-01-21 23:39:40.479 [INFO][4699] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="61dcc2e6619ae7159b586d0243dc3c8987f7958480fcfb5344efc01620b7c7cf" Namespace="calico-system" Pod="whisker-5d9759fcd4-9gjqv" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-whisker--5d9759fcd4--9gjqv-eth0" Jan 21 23:39:40.500316 containerd[2113]: 2026-01-21 23:39:40.480 [INFO][4699] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="61dcc2e6619ae7159b586d0243dc3c8987f7958480fcfb5344efc01620b7c7cf" Namespace="calico-system" 
Pod="whisker-5d9759fcd4-9gjqv" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-whisker--5d9759fcd4--9gjqv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--a0ba06055b-k8s-whisker--5d9759fcd4--9gjqv-eth0", GenerateName:"whisker-5d9759fcd4-", Namespace:"calico-system", SelfLink:"", UID:"fd7f2311-8da9-446b-ab8a-7da03038d65b", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 39, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d9759fcd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-a0ba06055b", ContainerID:"61dcc2e6619ae7159b586d0243dc3c8987f7958480fcfb5344efc01620b7c7cf", Pod:"whisker-5d9759fcd4-9gjqv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.73.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali42a0fa90703", MAC:"ce:56:50:28:85:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:39:40.500348 containerd[2113]: 2026-01-21 23:39:40.496 [INFO][4699] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="61dcc2e6619ae7159b586d0243dc3c8987f7958480fcfb5344efc01620b7c7cf" Namespace="calico-system" Pod="whisker-5d9759fcd4-9gjqv" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-whisker--5d9759fcd4--9gjqv-eth0" Jan 21 23:39:40.553191 containerd[2113]: time="2026-01-21T23:39:40.553102416Z" level=info msg="connecting to shim 61dcc2e6619ae7159b586d0243dc3c8987f7958480fcfb5344efc01620b7c7cf" address="unix:///run/containerd/s/d0d9041c3b17d0d0c5da81e182c27ab728b306a41137a5e1cc4091041f16c900" namespace=k8s.io protocol=ttrpc version=3 Jan 21 23:39:40.590720 systemd[1]: Started cri-containerd-61dcc2e6619ae7159b586d0243dc3c8987f7958480fcfb5344efc01620b7c7cf.scope - libcontainer container 61dcc2e6619ae7159b586d0243dc3c8987f7958480fcfb5344efc01620b7c7cf. 
Jan 21 23:39:40.614000 audit: BPF prog-id=199 op=LOAD Jan 21 23:39:40.615000 audit: BPF prog-id=200 op=LOAD Jan 21 23:39:40.615000 audit[4830]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4819 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631646363326536363139616537313539623538366430323433646333 Jan 21 23:39:40.615000 audit: BPF prog-id=200 op=UNLOAD Jan 21 23:39:40.615000 audit[4830]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4819 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631646363326536363139616537313539623538366430323433646333 Jan 21 23:39:40.615000 audit: BPF prog-id=201 op=LOAD Jan 21 23:39:40.615000 audit[4830]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4819 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631646363326536363139616537313539623538366430323433646333 Jan 21 23:39:40.616000 audit: BPF prog-id=202 op=LOAD Jan 21 23:39:40.616000 audit[4830]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4819 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631646363326536363139616537313539623538366430323433646333 Jan 21 23:39:40.616000 audit: BPF prog-id=202 op=UNLOAD Jan 21 23:39:40.616000 audit[4830]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4819 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631646363326536363139616537313539623538366430323433646333 Jan 21 23:39:40.616000 audit: BPF prog-id=201 op=UNLOAD Jan 21 23:39:40.616000 audit[4830]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4819 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631646363326536363139616537313539623538366430323433646333 Jan 21 23:39:40.616000 audit: BPF prog-id=203 op=LOAD Jan 21 23:39:40.616000 audit[4830]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4819 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631646363326536363139616537313539623538366430323433646333 Jan 21 23:39:40.693259 kubelet[3545]: I0121 23:39:40.693217 3545 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0a63e37-e85d-4a24-a001-b2e5766cb244" path="/var/lib/kubelet/pods/b0a63e37-e85d-4a24-a001-b2e5766cb244/volumes" Jan 21 23:39:40.789000 audit: BPF prog-id=204 op=LOAD Jan 21 23:39:40.789000 audit[4894]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc8b54448 a2=98 a3=ffffc8b54438 items=0 ppid=4727 pid=4894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.789000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 23:39:40.790000 audit: BPF prog-id=204 op=UNLOAD Jan 21 23:39:40.790000 audit[4894]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc8b54418 a3=0 items=0 ppid=4727 pid=4894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.790000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 23:39:40.790000 audit: BPF prog-id=205 op=LOAD Jan 21 23:39:40.790000 audit[4894]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc8b542f8 a2=74 a3=95 items=0 ppid=4727 pid=4894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.790000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 23:39:40.790000 audit: BPF prog-id=205 op=UNLOAD Jan 21 23:39:40.790000 audit[4894]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4727 pid=4894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.790000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 23:39:40.790000 audit: BPF prog-id=206 op=LOAD Jan 21 23:39:40.790000 audit[4894]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc8b54328 a2=40 a3=ffffc8b54358 items=0 ppid=4727 pid=4894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.790000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 23:39:40.790000 audit: BPF prog-id=206 op=UNLOAD Jan 21 23:39:40.790000 audit[4894]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffc8b54358 items=0 ppid=4727 pid=4894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.790000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 23:39:40.791000 audit: BPF prog-id=207 op=LOAD Jan 21 23:39:40.791000 audit[4895]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe05c5a58 a2=98 a3=ffffe05c5a48 items=0 ppid=4727 pid=4895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.791000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:39:40.791000 audit: BPF prog-id=207 op=UNLOAD Jan 21 23:39:40.791000 audit[4895]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe05c5a28 a3=0 items=0 ppid=4727 pid=4895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.791000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:39:40.791000 audit: BPF prog-id=208 op=LOAD Jan 21 23:39:40.791000 audit[4895]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe05c56e8 a2=74 a3=95 items=0 ppid=4727 pid=4895 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.791000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:39:40.791000 audit: BPF prog-id=208 op=UNLOAD Jan 21 23:39:40.791000 audit[4895]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4727 pid=4895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.791000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:39:40.791000 audit: BPF prog-id=209 op=LOAD Jan 21 23:39:40.791000 audit[4895]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe05c5748 a2=94 a3=2 items=0 ppid=4727 pid=4895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.791000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:39:40.791000 audit: BPF prog-id=209 op=UNLOAD Jan 21 23:39:40.791000 audit[4895]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4727 pid=4895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.791000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:39:40.878000 audit: BPF prog-id=210 op=LOAD Jan 21 23:39:40.878000 audit[4895]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe05c5708 a2=40 a3=ffffe05c5738 items=0 ppid=4727 pid=4895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.878000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:39:40.878000 audit: BPF prog-id=210 op=UNLOAD Jan 21 23:39:40.878000 audit[4895]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffe05c5738 items=0 ppid=4727 pid=4895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.878000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:39:40.885000 audit: BPF prog-id=211 op=LOAD Jan 21 23:39:40.885000 audit[4895]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe05c5718 a2=94 a3=4 items=0 ppid=4727 pid=4895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.885000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:39:40.885000 audit: BPF prog-id=211 op=UNLOAD Jan 21 23:39:40.885000 audit[4895]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4727 pid=4895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.885000 audit: 
PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:39:40.885000 audit: BPF prog-id=212 op=LOAD Jan 21 23:39:40.885000 audit[4895]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe05c5558 a2=94 a3=5 items=0 ppid=4727 pid=4895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.885000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:39:40.886000 audit: BPF prog-id=212 op=UNLOAD Jan 21 23:39:40.886000 audit[4895]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4727 pid=4895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.886000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:39:40.886000 audit: BPF prog-id=213 op=LOAD Jan 21 23:39:40.886000 audit[4895]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe05c5788 a2=94 a3=6 items=0 ppid=4727 pid=4895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.886000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:39:40.886000 audit: BPF prog-id=213 op=UNLOAD Jan 21 23:39:40.886000 audit[4895]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4727 pid=4895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.886000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:39:40.887000 audit: BPF prog-id=214 op=LOAD Jan 21 23:39:40.887000 audit[4895]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe05c4f58 a2=94 a3=83 items=0 ppid=4727 pid=4895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.887000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:39:40.887000 audit: BPF prog-id=215 op=LOAD Jan 21 23:39:40.887000 audit[4895]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffe05c4d18 a2=94 a3=2 items=0 ppid=4727 pid=4895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.887000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:39:40.887000 audit: BPF prog-id=215 op=UNLOAD Jan 21 23:39:40.887000 audit[4895]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4727 pid=4895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.887000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:39:40.888000 audit: BPF prog-id=214 op=UNLOAD Jan 21 23:39:40.888000 audit[4895]: SYSCALL arch=c00000b7 syscall=57 
success=yes exit=0 a0=5 a1=57156c a2=35e6e620 a3=35e61b00 items=0 ppid=4727 pid=4895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.888000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:39:40.898000 audit: BPF prog-id=216 op=LOAD Jan 21 23:39:40.898000 audit[4898]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff56e58c8 a2=98 a3=fffff56e58b8 items=0 ppid=4727 pid=4898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.898000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 23:39:40.898000 audit: BPF prog-id=216 op=UNLOAD Jan 21 23:39:40.898000 audit[4898]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff56e5898 a3=0 items=0 ppid=4727 pid=4898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.898000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 23:39:40.898000 audit: BPF prog-id=217 op=LOAD Jan 21 23:39:40.898000 audit[4898]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff56e5778 a2=74 a3=95 items=0 ppid=4727 pid=4898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.898000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 23:39:40.898000 audit: BPF prog-id=217 op=UNLOAD Jan 21 23:39:40.898000 audit[4898]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4727 pid=4898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.898000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 23:39:40.898000 audit: BPF prog-id=218 op=LOAD Jan 21 23:39:40.898000 audit[4898]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff56e57a8 a2=40 a3=fffff56e57d8 items=0 ppid=4727 pid=4898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 
23:39:40.898000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 23:39:40.898000 audit: BPF prog-id=218 op=UNLOAD Jan 21 23:39:40.898000 audit[4898]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffff56e57d8 items=0 ppid=4727 pid=4898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:40.898000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 23:39:41.010645 containerd[2113]: time="2026-01-21T23:39:41.010605488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d9759fcd4-9gjqv,Uid:fd7f2311-8da9-446b-ab8a-7da03038d65b,Namespace:calico-system,Attempt:0,} returns sandbox id \"61dcc2e6619ae7159b586d0243dc3c8987f7958480fcfb5344efc01620b7c7cf\"" Jan 21 23:39:41.012387 containerd[2113]: time="2026-01-21T23:39:41.012360237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 21 23:39:41.225160 systemd-networkd[1703]: vxlan.calico: Link UP Jan 21 23:39:41.225167 systemd-networkd[1703]: vxlan.calico: Gained carrier Jan 21 23:39:41.240000 audit: BPF prog-id=219 op=LOAD Jan 21 23:39:41.240000 audit[4926]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc050ae38 a2=98 a3=ffffc050ae28 items=0 ppid=4727 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.240000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 23:39:41.240000 audit: BPF prog-id=219 op=UNLOAD Jan 21 23:39:41.240000 audit[4926]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc050ae08 a3=0 items=0 ppid=4727 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.240000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 23:39:41.240000 audit: BPF prog-id=220 op=LOAD Jan 21 23:39:41.240000 audit[4926]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc050ab18 a2=74 a3=95 items=0 ppid=4727 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.240000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 23:39:41.240000 audit: BPF prog-id=220 op=UNLOAD Jan 21 23:39:41.240000 audit[4926]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4727 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.240000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 23:39:41.240000 audit: BPF prog-id=221 op=LOAD Jan 21 23:39:41.240000 audit[4926]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc050ab78 a2=94 a3=2 items=0 ppid=4727 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.240000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 23:39:41.240000 audit: BPF prog-id=221 op=UNLOAD Jan 21 23:39:41.240000 audit[4926]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4727 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.240000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 23:39:41.240000 audit: BPF prog-id=222 op=LOAD Jan 21 23:39:41.240000 audit[4926]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc050a9f8 a2=40 a3=ffffc050aa28 items=0 ppid=4727 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.240000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 23:39:41.242000 audit: BPF prog-id=222 op=UNLOAD Jan 21 23:39:41.242000 audit[4926]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffc050aa28 items=0 ppid=4727 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.242000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 23:39:41.242000 audit: BPF prog-id=223 op=LOAD Jan 21 23:39:41.242000 audit[4926]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=6 a0=5 a1=ffffc050ab48 a2=94 a3=b7 items=0 ppid=4727 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.242000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 23:39:41.242000 audit: BPF prog-id=223 op=UNLOAD Jan 21 23:39:41.242000 audit[4926]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4727 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.242000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 23:39:41.242000 audit: BPF prog-id=224 op=LOAD Jan 21 23:39:41.242000 audit[4926]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc050a1f8 a2=94 a3=2 items=0 ppid=4727 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.242000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 23:39:41.242000 audit: BPF prog-id=224 op=UNLOAD Jan 21 23:39:41.242000 audit[4926]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4727 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.242000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 23:39:41.242000 audit: BPF prog-id=225 op=LOAD Jan 21 23:39:41.242000 audit[4926]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc050a388 a2=94 a3=30 items=0 ppid=4727 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.242000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 23:39:41.249000 audit: BPF prog-id=226 op=LOAD Jan 21 23:39:41.249000 audit[4931]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe191c7f8 a2=98 a3=ffffe191c7e8 items=0 ppid=4727 pid=4931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.249000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:39:41.250000 audit: BPF prog-id=226 op=UNLOAD Jan 21 23:39:41.250000 audit[4931]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe191c7c8 a3=0 items=0 ppid=4727 pid=4931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.250000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:39:41.250000 audit: BPF prog-id=227 op=LOAD Jan 21 23:39:41.250000 audit[4931]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe191c488 a2=74 a3=95 items=0 ppid=4727 pid=4931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.250000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:39:41.251000 audit: BPF prog-id=227 op=UNLOAD Jan 21 23:39:41.251000 audit[4931]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4727 pid=4931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.251000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:39:41.251000 audit: BPF prog-id=228 op=LOAD Jan 21 23:39:41.251000 audit[4931]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe191c4e8 a2=94 a3=2 items=0 ppid=4727 pid=4931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.251000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:39:41.251000 audit: BPF prog-id=228 op=UNLOAD Jan 21 23:39:41.251000 audit[4931]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4727 pid=4931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.251000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:39:41.331000 audit: BPF prog-id=229 op=LOAD Jan 21 23:39:41.331000 audit[4931]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe191c4a8 a2=40 a3=ffffe191c4d8 items=0 ppid=4727 pid=4931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.331000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:39:41.331000 audit: BPF prog-id=229 op=UNLOAD Jan 21 23:39:41.331000 audit[4931]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffe191c4d8 items=0 ppid=4727 pid=4931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.331000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:39:41.338000 audit: BPF prog-id=230 op=LOAD Jan 21 23:39:41.338000 audit[4931]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe191c4b8 a2=94 a3=4 items=0 ppid=4727 pid=4931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.338000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:39:41.339000 audit: BPF prog-id=230 op=UNLOAD Jan 21 23:39:41.339000 audit[4931]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4727 pid=4931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.339000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:39:41.339000 audit: BPF prog-id=231 op=LOAD Jan 21 23:39:41.339000 audit[4931]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe191c2f8 a2=94 a3=5 items=0 ppid=4727 pid=4931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.339000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:39:41.339000 audit: BPF prog-id=231 op=UNLOAD Jan 21 23:39:41.339000 audit[4931]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4727 pid=4931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.339000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:39:41.339000 audit: BPF prog-id=232 op=LOAD Jan 21 23:39:41.339000 audit[4931]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe191c528 
a2=94 a3=6 items=0 ppid=4727 pid=4931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.339000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:39:41.339000 audit: BPF prog-id=232 op=UNLOAD Jan 21 23:39:41.339000 audit[4931]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4727 pid=4931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.339000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:39:41.339000 audit: BPF prog-id=233 op=LOAD Jan 21 23:39:41.339000 audit[4931]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe191bcf8 a2=94 a3=83 items=0 ppid=4727 pid=4931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.339000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:39:41.339000 audit: BPF prog-id=234 op=LOAD Jan 21 23:39:41.339000 audit[4931]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffe191bab8 a2=94 a3=2 items=0 ppid=4727 pid=4931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.339000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:39:41.339000 audit: BPF prog-id=234 op=UNLOAD Jan 21 23:39:41.339000 audit[4931]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4727 pid=4931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.339000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:39:41.340000 audit: BPF prog-id=233 op=UNLOAD Jan 21 23:39:41.340000 audit[4931]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=1796c620 a3=1795fb00 items=0 ppid=4727 pid=4931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.340000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:39:41.351000 audit: BPF 
prog-id=225 op=UNLOAD Jan 21 23:39:41.351000 audit[4727]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=40008c0200 a2=0 a3=0 items=0 ppid=4721 pid=4727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.351000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 21 23:39:41.463000 audit[4958]: NETFILTER_CFG table=mangle:122 family=2 entries=16 op=nft_register_chain pid=4958 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 23:39:41.463000 audit[4958]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffc6fe4e90 a2=0 a3=ffffa28ebfa8 items=0 ppid=4727 pid=4958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.464000 audit[4957]: NETFILTER_CFG table=nat:123 family=2 entries=15 op=nft_register_chain pid=4957 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 23:39:41.464000 audit[4957]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffddc7e2f0 a2=0 a3=ffff8450afa8 items=0 ppid=4727 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.464000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 23:39:41.463000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 23:39:41.470000 audit[4956]: NETFILTER_CFG table=raw:124 family=2 entries=21 op=nft_register_chain pid=4956 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 23:39:41.470000 audit[4956]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=fffff7097310 a2=0 a3=ffff9bd8efa8 items=0 ppid=4727 pid=4956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.470000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 23:39:41.479227 containerd[2113]: time="2026-01-21T23:39:41.479099831Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:39:41.483295 containerd[2113]: time="2026-01-21T23:39:41.483228294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 21 23:39:41.483413 containerd[2113]: time="2026-01-21T23:39:41.483265879Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 21 23:39:41.484201 kubelet[3545]: E0121 23:39:41.484142 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 23:39:41.484425 kubelet[3545]: E0121 23:39:41.484279 3545 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 23:39:41.489807 kubelet[3545]: E0121 23:39:41.489461 3545 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:34f3459d8c144933a66ddd93a201138a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xjhg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d9759fcd4-9gjqv_calico-system(fd7f2311-8da9-446b-ab8a-7da03038d65b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 21 23:39:41.491409 containerd[2113]: time="2026-01-21T23:39:41.491374601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 21 23:39:41.475000 audit[4962]: NETFILTER_CFG table=filter:125 family=2 entries=94 op=nft_register_chain pid=4962 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 23:39:41.475000 audit[4962]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=fffff2a72fe0 a2=0 a3=ffff81d96fa8 items=0 ppid=4727 pid=4962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.475000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 23:39:41.755234 containerd[2113]: time="2026-01-21T23:39:41.755032282Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:39:41.756210 systemd-networkd[1703]: cali42a0fa90703: Gained IPv6LL Jan 21 
23:39:41.758010 containerd[2113]: time="2026-01-21T23:39:41.757882525Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 21 23:39:41.758010 containerd[2113]: time="2026-01-21T23:39:41.757911366Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 21 23:39:41.758182 kubelet[3545]: E0121 23:39:41.758138 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 23:39:41.758438 kubelet[3545]: E0121 23:39:41.758185 3545 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 23:39:41.758471 kubelet[3545]: E0121 23:39:41.758296 3545 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xjhg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d9759fcd4-9gjqv_calico-system(fd7f2311-8da9-446b-ab8a-7da03038d65b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" logger="UnhandledError" Jan 21 23:39:41.759791 kubelet[3545]: E0121 23:39:41.759725 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d9759fcd4-9gjqv" podUID="fd7f2311-8da9-446b-ab8a-7da03038d65b" Jan 21 23:39:41.924170 kubelet[3545]: E0121 23:39:41.924117 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d9759fcd4-9gjqv" podUID="fd7f2311-8da9-446b-ab8a-7da03038d65b" Jan 21 23:39:41.946000 audit[4971]: NETFILTER_CFG table=filter:126 family=2 entries=20 op=nft_register_rule pid=4971 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:41.946000 audit[4971]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff5c14e10 a2=0 a3=1 items=0 ppid=3761 pid=4971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.946000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:41.952000 audit[4971]: NETFILTER_CFG table=nat:127 family=2 entries=14 op=nft_register_rule pid=4971 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:41.952000 audit[4971]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=fffff5c14e10 a2=0 a3=1 items=0 ppid=3761 pid=4971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:41.952000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:42.461251 systemd-networkd[1703]: vxlan.calico: Gained IPv6LL Jan 21 23:39:42.925969 kubelet[3545]: E0121 23:39:42.925526 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d9759fcd4-9gjqv" podUID="fd7f2311-8da9-446b-ab8a-7da03038d65b" Jan 21 23:39:47.690368 containerd[2113]: time="2026-01-21T23:39:47.690273634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84459bb977-fzglc,Uid:b1e000bd-2ebe-4f78-af48-a2456035e42f,Namespace:calico-system,Attempt:0,}" Jan 21 23:39:47.691123 containerd[2113]: time="2026-01-21T23:39:47.690322348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5h2h6,Uid:6e0b4320-3f37-40f4-a5ce-f14aca373850,Namespace:kube-system,Attempt:0,}" Jan 21 23:39:47.691123 containerd[2113]: time="2026-01-21T23:39:47.690345621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-btpgn,Uid:7707a694-44de-42a3-8dd7-f5d1565db734,Namespace:kube-system,Attempt:0,}" Jan 21 23:39:47.849512 systemd-networkd[1703]: cali028f81e5b84: Link UP Jan 21 23:39:47.850919 systemd-networkd[1703]: cali028f81e5b84: Gained carrier Jan 21 23:39:47.865872 containerd[2113]: 2026-01-21 23:39:47.773 [INFO][4983] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--n--a0ba06055b-k8s-calico--kube--controllers--84459bb977--fzglc-eth0 calico-kube-controllers-84459bb977- calico-system b1e000bd-2ebe-4f78-af48-a2456035e42f 808 0 2026-01-21 23:39:24 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:84459bb977 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4515.1.0-n-a0ba06055b calico-kube-controllers-84459bb977-fzglc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali028f81e5b84 [] [] }} ContainerID="d353205985bdf22a4b59bd4777c8b78ad93e56af869373420afbbdaa4d46395c" Namespace="calico-system" Pod="calico-kube-controllers-84459bb977-fzglc" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-calico--kube--controllers--84459bb977--fzglc-" Jan 21 23:39:47.865872 containerd[2113]: 2026-01-21 23:39:47.776 [INFO][4983] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d353205985bdf22a4b59bd4777c8b78ad93e56af869373420afbbdaa4d46395c" Namespace="calico-system" Pod="calico-kube-controllers-84459bb977-fzglc" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-calico--kube--controllers--84459bb977--fzglc-eth0" Jan 21 23:39:47.865872 containerd[2113]: 2026-01-21 23:39:47.805 [INFO][5023] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d353205985bdf22a4b59bd4777c8b78ad93e56af869373420afbbdaa4d46395c" HandleID="k8s-pod-network.d353205985bdf22a4b59bd4777c8b78ad93e56af869373420afbbdaa4d46395c" Workload="ci--4515.1.0--n--a0ba06055b-k8s-calico--kube--controllers--84459bb977--fzglc-eth0" Jan 21 23:39:47.866172 containerd[2113]: 2026-01-21 23:39:47.806 [INFO][5023] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d353205985bdf22a4b59bd4777c8b78ad93e56af869373420afbbdaa4d46395c" 
HandleID="k8s-pod-network.d353205985bdf22a4b59bd4777c8b78ad93e56af869373420afbbdaa4d46395c" Workload="ci--4515.1.0--n--a0ba06055b-k8s-calico--kube--controllers--84459bb977--fzglc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-n-a0ba06055b", "pod":"calico-kube-controllers-84459bb977-fzglc", "timestamp":"2026-01-21 23:39:47.805704241 +0000 UTC"}, Hostname:"ci-4515.1.0-n-a0ba06055b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 23:39:47.866172 containerd[2113]: 2026-01-21 23:39:47.806 [INFO][5023] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 23:39:47.866172 containerd[2113]: 2026-01-21 23:39:47.806 [INFO][5023] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 21 23:39:47.866172 containerd[2113]: 2026-01-21 23:39:47.806 [INFO][5023] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-a0ba06055b' Jan 21 23:39:47.866172 containerd[2113]: 2026-01-21 23:39:47.815 [INFO][5023] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d353205985bdf22a4b59bd4777c8b78ad93e56af869373420afbbdaa4d46395c" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:47.866172 containerd[2113]: 2026-01-21 23:39:47.819 [INFO][5023] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:47.866172 containerd[2113]: 2026-01-21 23:39:47.824 [INFO][5023] ipam/ipam.go 511: Trying affinity for 192.168.73.64/26 host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:47.866172 containerd[2113]: 2026-01-21 23:39:47.825 [INFO][5023] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.64/26 host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:47.866172 containerd[2113]: 2026-01-21 23:39:47.827 [INFO][5023] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.64/26 host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:47.866614 containerd[2113]: 2026-01-21 23:39:47.827 [INFO][5023] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.73.64/26 handle="k8s-pod-network.d353205985bdf22a4b59bd4777c8b78ad93e56af869373420afbbdaa4d46395c" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:47.866614 containerd[2113]: 2026-01-21 23:39:47.829 [INFO][5023] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d353205985bdf22a4b59bd4777c8b78ad93e56af869373420afbbdaa4d46395c Jan 21 23:39:47.866614 containerd[2113]: 2026-01-21 23:39:47.833 [INFO][5023] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.64/26 handle="k8s-pod-network.d353205985bdf22a4b59bd4777c8b78ad93e56af869373420afbbdaa4d46395c" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:47.866614 containerd[2113]: 2026-01-21 23:39:47.841 [INFO][5023] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.73.66/26] block=192.168.73.64/26 handle="k8s-pod-network.d353205985bdf22a4b59bd4777c8b78ad93e56af869373420afbbdaa4d46395c" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:47.866614 containerd[2113]: 2026-01-21 23:39:47.841 [INFO][5023] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.66/26] handle="k8s-pod-network.d353205985bdf22a4b59bd4777c8b78ad93e56af869373420afbbdaa4d46395c" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:47.866614 containerd[2113]: 2026-01-21 23:39:47.841 [INFO][5023] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 21 23:39:47.866614 containerd[2113]: 2026-01-21 23:39:47.841 [INFO][5023] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.66/26] IPv6=[] ContainerID="d353205985bdf22a4b59bd4777c8b78ad93e56af869373420afbbdaa4d46395c" HandleID="k8s-pod-network.d353205985bdf22a4b59bd4777c8b78ad93e56af869373420afbbdaa4d46395c" Workload="ci--4515.1.0--n--a0ba06055b-k8s-calico--kube--controllers--84459bb977--fzglc-eth0" Jan 21 23:39:47.868098 containerd[2113]: 2026-01-21 23:39:47.844 [INFO][4983] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d353205985bdf22a4b59bd4777c8b78ad93e56af869373420afbbdaa4d46395c" Namespace="calico-system" Pod="calico-kube-controllers-84459bb977-fzglc" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-calico--kube--controllers--84459bb977--fzglc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--a0ba06055b-k8s-calico--kube--controllers--84459bb977--fzglc-eth0", GenerateName:"calico-kube-controllers-84459bb977-", Namespace:"calico-system", SelfLink:"", UID:"b1e000bd-2ebe-4f78-af48-a2456035e42f", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 39, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84459bb977", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-a0ba06055b", ContainerID:"", Pod:"calico-kube-controllers-84459bb977-fzglc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.73.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali028f81e5b84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:39:47.868154 containerd[2113]: 2026-01-21 23:39:47.844 [INFO][4983] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.66/32] ContainerID="d353205985bdf22a4b59bd4777c8b78ad93e56af869373420afbbdaa4d46395c" Namespace="calico-system" Pod="calico-kube-controllers-84459bb977-fzglc" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-calico--kube--controllers--84459bb977--fzglc-eth0" Jan 21 23:39:47.868154 containerd[2113]: 2026-01-21 23:39:47.844 [INFO][4983] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali028f81e5b84 ContainerID="d353205985bdf22a4b59bd4777c8b78ad93e56af869373420afbbdaa4d46395c" Namespace="calico-system" Pod="calico-kube-controllers-84459bb977-fzglc" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-calico--kube--controllers--84459bb977--fzglc-eth0" Jan 21 23:39:47.868154 containerd[2113]: 2026-01-21 23:39:47.851 [INFO][4983] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d353205985bdf22a4b59bd4777c8b78ad93e56af869373420afbbdaa4d46395c" Namespace="calico-system" Pod="calico-kube-controllers-84459bb977-fzglc" 
WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-calico--kube--controllers--84459bb977--fzglc-eth0" Jan 21 23:39:47.868202 containerd[2113]: 2026-01-21 23:39:47.851 [INFO][4983] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d353205985bdf22a4b59bd4777c8b78ad93e56af869373420afbbdaa4d46395c" Namespace="calico-system" Pod="calico-kube-controllers-84459bb977-fzglc" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-calico--kube--controllers--84459bb977--fzglc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--a0ba06055b-k8s-calico--kube--controllers--84459bb977--fzglc-eth0", GenerateName:"calico-kube-controllers-84459bb977-", Namespace:"calico-system", SelfLink:"", UID:"b1e000bd-2ebe-4f78-af48-a2456035e42f", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 39, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84459bb977", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-a0ba06055b", ContainerID:"d353205985bdf22a4b59bd4777c8b78ad93e56af869373420afbbdaa4d46395c", Pod:"calico-kube-controllers-84459bb977-fzglc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.73.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali028f81e5b84", MAC:"96:e2:d5:b4:2a:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:39:47.868236 containerd[2113]: 2026-01-21 23:39:47.863 [INFO][4983] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d353205985bdf22a4b59bd4777c8b78ad93e56af869373420afbbdaa4d46395c" Namespace="calico-system" Pod="calico-kube-controllers-84459bb977-fzglc" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-calico--kube--controllers--84459bb977--fzglc-eth0" Jan 21 23:39:47.875000 audit[5048]: NETFILTER_CFG table=filter:128 family=2 entries=36 op=nft_register_chain pid=5048 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 23:39:47.879859 kernel: kauditd_printk_skb: 237 callbacks suppressed Jan 21 23:39:47.879942 kernel: audit: type=1325 audit(1769038787.875:668): table=filter:128 family=2 entries=36 op=nft_register_chain pid=5048 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 23:39:47.875000 audit[5048]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19576 a0=3 a1=ffffce37c780 a2=0 a3=ffff80ff9fa8 items=0 ppid=4727 pid=5048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:47.909063 kernel: audit: type=1300 audit(1769038787.875:668): arch=c00000b7 syscall=211 success=yes exit=19576 a0=3 a1=ffffce37c780 a2=0 a3=ffff80ff9fa8 items=0 ppid=4727 pid=5048 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:47.911488 containerd[2113]: time="2026-01-21T23:39:47.911109348Z" level=info msg="connecting to shim d353205985bdf22a4b59bd4777c8b78ad93e56af869373420afbbdaa4d46395c" address="unix:///run/containerd/s/c7bdf6b775f7dedd5a337189f9fe16fa4edc54a6c499274d56a81d8e58a4a40e" namespace=k8s.io protocol=ttrpc version=3 Jan 21 23:39:47.875000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 23:39:47.924232 kernel: audit: type=1327 audit(1769038787.875:668): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 23:39:47.947246 systemd[1]: Started cri-containerd-d353205985bdf22a4b59bd4777c8b78ad93e56af869373420afbbdaa4d46395c.scope - libcontainer container d353205985bdf22a4b59bd4777c8b78ad93e56af869373420afbbdaa4d46395c. Jan 21 23:39:47.970160 systemd-networkd[1703]: calib01bb7c92ab: Link UP Jan 21 23:39:47.971221 systemd-networkd[1703]: calib01bb7c92ab: Gained carrier Jan 21 23:39:47.987000 audit: BPF prog-id=235 op=LOAD Jan 21 23:39:47.990648 containerd[2113]: 2026-01-21 23:39:47.778 [INFO][4998] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--n--a0ba06055b-k8s-coredns--674b8bbfcf--btpgn-eth0 coredns-674b8bbfcf- kube-system 7707a694-44de-42a3-8dd7-f5d1565db734 813 0 2026-01-21 23:39:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515.1.0-n-a0ba06055b coredns-674b8bbfcf-btpgn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib01bb7c92ab [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2a899cd29b4530ae296dd76853a230a53d0a842a3af057bcd63d4fe68649c594" Namespace="kube-system" Pod="coredns-674b8bbfcf-btpgn" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-coredns--674b8bbfcf--btpgn-" Jan 21 23:39:47.990648 containerd[2113]: 2026-01-21 23:39:47.779 [INFO][4998] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2a899cd29b4530ae296dd76853a230a53d0a842a3af057bcd63d4fe68649c594" Namespace="kube-system" Pod="coredns-674b8bbfcf-btpgn" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-coredns--674b8bbfcf--btpgn-eth0" Jan 21 23:39:47.990648 containerd[2113]: 2026-01-21 23:39:47.818 [INFO][5020] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a899cd29b4530ae296dd76853a230a53d0a842a3af057bcd63d4fe68649c594" HandleID="k8s-pod-network.2a899cd29b4530ae296dd76853a230a53d0a842a3af057bcd63d4fe68649c594" Workload="ci--4515.1.0--n--a0ba06055b-k8s-coredns--674b8bbfcf--btpgn-eth0" Jan 21 23:39:47.990990 containerd[2113]: 2026-01-21 23:39:47.818 [INFO][5020] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2a899cd29b4530ae296dd76853a230a53d0a842a3af057bcd63d4fe68649c594" HandleID="k8s-pod-network.2a899cd29b4530ae296dd76853a230a53d0a842a3af057bcd63d4fe68649c594" Workload="ci--4515.1.0--n--a0ba06055b-k8s-coredns--674b8bbfcf--btpgn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35a0), 
Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515.1.0-n-a0ba06055b", "pod":"coredns-674b8bbfcf-btpgn", "timestamp":"2026-01-21 23:39:47.818345288 +0000 UTC"}, Hostname:"ci-4515.1.0-n-a0ba06055b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 23:39:47.990990 containerd[2113]: 2026-01-21 23:39:47.818 [INFO][5020] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 23:39:47.990990 containerd[2113]: 2026-01-21 23:39:47.841 [INFO][5020] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 21 23:39:47.990990 containerd[2113]: 2026-01-21 23:39:47.841 [INFO][5020] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-a0ba06055b' Jan 21 23:39:47.990990 containerd[2113]: 2026-01-21 23:39:47.925 [INFO][5020] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2a899cd29b4530ae296dd76853a230a53d0a842a3af057bcd63d4fe68649c594" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:47.990990 containerd[2113]: 2026-01-21 23:39:47.934 [INFO][5020] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:47.990990 containerd[2113]: 2026-01-21 23:39:47.938 [INFO][5020] ipam/ipam.go 511: Trying affinity for 192.168.73.64/26 host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:47.990990 containerd[2113]: 2026-01-21 23:39:47.942 [INFO][5020] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.64/26 host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:47.990990 containerd[2113]: 2026-01-21 23:39:47.945 [INFO][5020] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.64/26 host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:47.991191 containerd[2113]: 2026-01-21 23:39:47.945 [INFO][5020] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.73.64/26 handle="k8s-pod-network.2a899cd29b4530ae296dd76853a230a53d0a842a3af057bcd63d4fe68649c594" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:47.991191 containerd[2113]: 2026-01-21 23:39:47.946 [INFO][5020] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2a899cd29b4530ae296dd76853a230a53d0a842a3af057bcd63d4fe68649c594 Jan 21 23:39:47.991191 containerd[2113]: 2026-01-21 23:39:47.953 [INFO][5020] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.64/26 handle="k8s-pod-network.2a899cd29b4530ae296dd76853a230a53d0a842a3af057bcd63d4fe68649c594" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:47.991191 containerd[2113]: 2026-01-21 23:39:47.959 [INFO][5020] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.73.67/26] block=192.168.73.64/26 handle="k8s-pod-network.2a899cd29b4530ae296dd76853a230a53d0a842a3af057bcd63d4fe68649c594" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:47.991191 containerd[2113]: 2026-01-21 23:39:47.960 [INFO][5020] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.67/26] handle="k8s-pod-network.2a899cd29b4530ae296dd76853a230a53d0a842a3af057bcd63d4fe68649c594" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:47.991191 containerd[2113]: 2026-01-21 23:39:47.960 [INFO][5020] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
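The containerd and kubelet records above show both whisker images (ghcr.io/flatcar/calico/whisker:v3.30.4 and whisker-backend:v3.30.4) failing with 404 Not Found and the pod falling into ImagePullBackOff. A small sketch, assuming the journal is available as plain text on stdin (for example piped from journalctl), that tallies which image references are failing to pull; the regex targets the escaped containerd error format seen above:

```python
import re
import sys
from collections import Counter

# Matches containerd's escaped error lines, e.g.
#   msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed"
PULL_FAILED = re.compile(r'PullImage \\"([^"\\]+)\\" failed')

def failed_pulls(journal_text: str) -> Counter:
    """Count failed image pulls per image reference."""
    return Counter(PULL_FAILED.findall(journal_text))

if __name__ == "__main__":
    for image, count in failed_pulls(sys.stdin.read()).most_common():
        print(f"{count:4d}  {image}")
```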
Jan 21 23:39:47.991191 containerd[2113]: 2026-01-21 23:39:47.960 [INFO][5020] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.67/26] IPv6=[] ContainerID="2a899cd29b4530ae296dd76853a230a53d0a842a3af057bcd63d4fe68649c594" HandleID="k8s-pod-network.2a899cd29b4530ae296dd76853a230a53d0a842a3af057bcd63d4fe68649c594" Workload="ci--4515.1.0--n--a0ba06055b-k8s-coredns--674b8bbfcf--btpgn-eth0" Jan 21 23:39:47.991304 containerd[2113]: 2026-01-21 23:39:47.963 [INFO][4998] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a899cd29b4530ae296dd76853a230a53d0a842a3af057bcd63d4fe68649c594" Namespace="kube-system" Pod="coredns-674b8bbfcf-btpgn" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-coredns--674b8bbfcf--btpgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--a0ba06055b-k8s-coredns--674b8bbfcf--btpgn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7707a694-44de-42a3-8dd7-f5d1565db734", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 39, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-a0ba06055b", ContainerID:"", Pod:"coredns-674b8bbfcf-btpgn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib01bb7c92ab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:39:47.991304 containerd[2113]: 2026-01-21 23:39:47.964 [INFO][4998] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.67/32] ContainerID="2a899cd29b4530ae296dd76853a230a53d0a842a3af057bcd63d4fe68649c594" Namespace="kube-system" Pod="coredns-674b8bbfcf-btpgn" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-coredns--674b8bbfcf--btpgn-eth0" Jan 21 23:39:47.991304 containerd[2113]: 2026-01-21 23:39:47.964 [INFO][4998] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib01bb7c92ab ContainerID="2a899cd29b4530ae296dd76853a230a53d0a842a3af057bcd63d4fe68649c594" Namespace="kube-system" Pod="coredns-674b8bbfcf-btpgn" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-coredns--674b8bbfcf--btpgn-eth0" Jan 21 23:39:47.991304 containerd[2113]: 2026-01-21 23:39:47.974 [INFO][4998] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a899cd29b4530ae296dd76853a230a53d0a842a3af057bcd63d4fe68649c594" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-btpgn" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-coredns--674b8bbfcf--btpgn-eth0" Jan 21 23:39:47.991304 containerd[2113]: 2026-01-21 23:39:47.975 [INFO][4998] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2a899cd29b4530ae296dd76853a230a53d0a842a3af057bcd63d4fe68649c594" Namespace="kube-system" Pod="coredns-674b8bbfcf-btpgn" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-coredns--674b8bbfcf--btpgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--a0ba06055b-k8s-coredns--674b8bbfcf--btpgn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7707a694-44de-42a3-8dd7-f5d1565db734", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 39, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-a0ba06055b", ContainerID:"2a899cd29b4530ae296dd76853a230a53d0a842a3af057bcd63d4fe68649c594", Pod:"coredns-674b8bbfcf-btpgn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib01bb7c92ab", MAC:"1a:c1:fb:15:fd:49", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:39:47.991304 containerd[2113]: 2026-01-21 23:39:47.987 [INFO][4998] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2a899cd29b4530ae296dd76853a230a53d0a842a3af057bcd63d4fe68649c594" Namespace="kube-system" Pod="coredns-674b8bbfcf-btpgn" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-coredns--674b8bbfcf--btpgn-eth0" Jan 21 23:39:48.000070 kernel: audit: type=1334 audit(1769038787.987:669): prog-id=235 op=LOAD Jan 21 23:39:48.000166 kernel: audit: type=1334 audit(1769038787.993:670): prog-id=236 op=LOAD Jan 21 23:39:48.000183 kernel: audit: type=1300 audit(1769038787.993:670): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5057 pid=5067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:47.993000 audit: BPF prog-id=236 op=LOAD Jan 21 23:39:47.993000 audit[5067]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5057 pid=5067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:47.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433353332303539383562646632326134623539626434373737633862 Jan 21 23:39:48.039286 kernel: audit: type=1327 audit(1769038787.993:670): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433353332303539383562646632326134623539626434373737633862 Jan 21 23:39:47.993000 audit: BPF prog-id=236 op=UNLOAD Jan 21 23:39:48.045835 kernel: audit: type=1334 audit(1769038787.993:671): prog-id=236 op=UNLOAD Jan 21 23:39:47.993000 audit[5067]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5057 pid=5067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.065653 kernel: audit: type=1300 audit(1769038787.993:671): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5057 pid=5067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:47.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433353332303539383562646632326134623539626434373737633862 Jan 21 23:39:48.085202 kernel: audit: type=1327 audit(1769038787.993:671): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433353332303539383562646632326134623539626434373737633862 Jan 21 23:39:47.993000 audit: BPF prog-id=237 op=LOAD Jan 21 23:39:47.993000 audit[5067]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5057 pid=5067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:47.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433353332303539383562646632326134623539626434373737633862 Jan 21 23:39:47.993000 audit: BPF prog-id=238 op=LOAD Jan 21 23:39:47.993000 audit[5067]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5057 pid=5067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:47.993000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433353332303539383562646632326134623539626434373737633862 Jan 21 23:39:48.018000 audit: BPF prog-id=238 op=UNLOAD Jan 21 23:39:48.018000 audit[5067]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5057 pid=5067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433353332303539383562646632326134623539626434373737633862 Jan 21 23:39:48.018000 audit: BPF prog-id=237 op=UNLOAD Jan 21 23:39:48.018000 audit[5067]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5057 pid=5067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433353332303539383562646632326134623539626434373737633862 Jan 21 23:39:48.018000 audit: BPF prog-id=239 op=LOAD Jan 21 23:39:48.018000 audit[5067]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5057 pid=5067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433353332303539383562646632326134623539626434373737633862 Jan 21 23:39:48.090000 audit[5098]: NETFILTER_CFG table=filter:129 family=2 entries=46 op=nft_register_chain pid=5098 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 23:39:48.090000 audit[5098]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23740 a0=3 a1=ffffe9e42610 a2=0 a3=ffffb289ffa8 items=0 ppid=4727 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.090000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 23:39:48.118707 containerd[2113]: time="2026-01-21T23:39:48.118585293Z" level=info msg="connecting to shim 2a899cd29b4530ae296dd76853a230a53d0a842a3af057bcd63d4fe68649c594" address="unix:///run/containerd/s/f195c39b24507b3120e27a27964d175c6232c35b756aa01499aa07210a7bc088" namespace=k8s.io protocol=ttrpc version=3 Jan 21 23:39:48.136006 systemd-networkd[1703]: cali13559461ec0: Link UP Jan 21 23:39:48.137182 systemd-networkd[1703]: cali13559461ec0: Gained carrier Jan 21 23:39:48.141639 containerd[2113]: 
time="2026-01-21T23:39:48.141604500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84459bb977-fzglc,Uid:b1e000bd-2ebe-4f78-af48-a2456035e42f,Namespace:calico-system,Attempt:0,} returns sandbox id \"d353205985bdf22a4b59bd4777c8b78ad93e56af869373420afbbdaa4d46395c\"" Jan 21 23:39:48.147149 containerd[2113]: time="2026-01-21T23:39:48.147106163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 21 23:39:48.155920 containerd[2113]: 2026-01-21 23:39:47.776 [INFO][4987] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--n--a0ba06055b-k8s-coredns--674b8bbfcf--5h2h6-eth0 coredns-674b8bbfcf- kube-system 6e0b4320-3f37-40f4-a5ce-f14aca373850 812 0 2026-01-21 23:39:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515.1.0-n-a0ba06055b coredns-674b8bbfcf-5h2h6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali13559461ec0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="50420eebae745678d6890ab24d1725d03ca25108ac8e16a6089ffb507b2b6171" Namespace="kube-system" Pod="coredns-674b8bbfcf-5h2h6" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-coredns--674b8bbfcf--5h2h6-" Jan 21 23:39:48.155920 containerd[2113]: 2026-01-21 23:39:47.776 [INFO][4987] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="50420eebae745678d6890ab24d1725d03ca25108ac8e16a6089ffb507b2b6171" Namespace="kube-system" Pod="coredns-674b8bbfcf-5h2h6" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-coredns--674b8bbfcf--5h2h6-eth0" Jan 21 23:39:48.155920 containerd[2113]: 2026-01-21 23:39:47.824 [INFO][5024] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="50420eebae745678d6890ab24d1725d03ca25108ac8e16a6089ffb507b2b6171" HandleID="k8s-pod-network.50420eebae745678d6890ab24d1725d03ca25108ac8e16a6089ffb507b2b6171" Workload="ci--4515.1.0--n--a0ba06055b-k8s-coredns--674b8bbfcf--5h2h6-eth0" Jan 21 23:39:48.155920 containerd[2113]: 2026-01-21 23:39:47.824 [INFO][5024] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="50420eebae745678d6890ab24d1725d03ca25108ac8e16a6089ffb507b2b6171" HandleID="k8s-pod-network.50420eebae745678d6890ab24d1725d03ca25108ac8e16a6089ffb507b2b6171" Workload="ci--4515.1.0--n--a0ba06055b-k8s-coredns--674b8bbfcf--5h2h6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b0d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515.1.0-n-a0ba06055b", "pod":"coredns-674b8bbfcf-5h2h6", "timestamp":"2026-01-21 23:39:47.8241162 +0000 UTC"}, Hostname:"ci-4515.1.0-n-a0ba06055b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 23:39:48.155920 containerd[2113]: 2026-01-21 23:39:47.824 [INFO][5024] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 23:39:48.155920 containerd[2113]: 2026-01-21 23:39:47.961 [INFO][5024] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 21 23:39:48.155920 containerd[2113]: 2026-01-21 23:39:47.961 [INFO][5024] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-a0ba06055b' Jan 21 23:39:48.155920 containerd[2113]: 2026-01-21 23:39:48.030 [INFO][5024] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.50420eebae745678d6890ab24d1725d03ca25108ac8e16a6089ffb507b2b6171" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:48.155920 containerd[2113]: 2026-01-21 23:39:48.040 [INFO][5024] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:48.155920 containerd[2113]: 2026-01-21 23:39:48.046 [INFO][5024] ipam/ipam.go 511: Trying affinity for 192.168.73.64/26 host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:48.155920 containerd[2113]: 2026-01-21 23:39:48.088 [INFO][5024] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.64/26 host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:48.155920 containerd[2113]: 2026-01-21 23:39:48.091 [INFO][5024] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.64/26 host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:48.155920 containerd[2113]: 2026-01-21 23:39:48.092 [INFO][5024] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.73.64/26 handle="k8s-pod-network.50420eebae745678d6890ab24d1725d03ca25108ac8e16a6089ffb507b2b6171" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:48.155920 containerd[2113]: 2026-01-21 23:39:48.096 [INFO][5024] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.50420eebae745678d6890ab24d1725d03ca25108ac8e16a6089ffb507b2b6171 Jan 21 23:39:48.155920 containerd[2113]: 2026-01-21 23:39:48.103 [INFO][5024] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.64/26 handle="k8s-pod-network.50420eebae745678d6890ab24d1725d03ca25108ac8e16a6089ffb507b2b6171" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:48.155920 containerd[2113]: 2026-01-21 23:39:48.116 [INFO][5024] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.73.68/26] block=192.168.73.64/26 handle="k8s-pod-network.50420eebae745678d6890ab24d1725d03ca25108ac8e16a6089ffb507b2b6171" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:48.155920 containerd[2113]: 2026-01-21 23:39:48.116 [INFO][5024] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.68/26] handle="k8s-pod-network.50420eebae745678d6890ab24d1725d03ca25108ac8e16a6089ffb507b2b6171" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:48.155920 containerd[2113]: 2026-01-21 23:39:48.118 [INFO][5024] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
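The Calico IPAM lines above report the host's affine block 192.168.73.64/26 and the address 192.168.73.68/26 claimed from it for coredns-674b8bbfcf-5h2h6. As a quick illustrative check on the side (not part of the log), Python's standard ipaddress module confirms the claimed address sits inside that /26 and shows the size of one Calico block:

    import ipaddress

    # Illustrative check, not from the log: the affine block and the address
    # Calico IPAM reported above for coredns-674b8bbfcf-5h2h6.
    block = ipaddress.ip_network("192.168.73.64/26")
    claimed = ipaddress.ip_address("192.168.73.68")

    print(claimed in block)      # True  -> the claim stays inside the affine block
    print(block.num_addresses)   # 64    -> addresses covered by one /26 block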
Jan 21 23:39:48.155920 containerd[2113]: 2026-01-21 23:39:48.118 [INFO][5024] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.68/26] IPv6=[] ContainerID="50420eebae745678d6890ab24d1725d03ca25108ac8e16a6089ffb507b2b6171" HandleID="k8s-pod-network.50420eebae745678d6890ab24d1725d03ca25108ac8e16a6089ffb507b2b6171" Workload="ci--4515.1.0--n--a0ba06055b-k8s-coredns--674b8bbfcf--5h2h6-eth0" Jan 21 23:39:48.156700 containerd[2113]: 2026-01-21 23:39:48.121 [INFO][4987] cni-plugin/k8s.go 418: Populated endpoint ContainerID="50420eebae745678d6890ab24d1725d03ca25108ac8e16a6089ffb507b2b6171" Namespace="kube-system" Pod="coredns-674b8bbfcf-5h2h6" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-coredns--674b8bbfcf--5h2h6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--a0ba06055b-k8s-coredns--674b8bbfcf--5h2h6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6e0b4320-3f37-40f4-a5ce-f14aca373850", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 39, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-a0ba06055b", ContainerID:"", Pod:"coredns-674b8bbfcf-5h2h6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali13559461ec0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:39:48.156700 containerd[2113]: 2026-01-21 23:39:48.121 [INFO][4987] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.68/32] ContainerID="50420eebae745678d6890ab24d1725d03ca25108ac8e16a6089ffb507b2b6171" Namespace="kube-system" Pod="coredns-674b8bbfcf-5h2h6" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-coredns--674b8bbfcf--5h2h6-eth0" Jan 21 23:39:48.156700 containerd[2113]: 2026-01-21 23:39:48.121 [INFO][4987] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali13559461ec0 ContainerID="50420eebae745678d6890ab24d1725d03ca25108ac8e16a6089ffb507b2b6171" Namespace="kube-system" Pod="coredns-674b8bbfcf-5h2h6" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-coredns--674b8bbfcf--5h2h6-eth0" Jan 21 23:39:48.156700 containerd[2113]: 2026-01-21 23:39:48.137 [INFO][4987] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="50420eebae745678d6890ab24d1725d03ca25108ac8e16a6089ffb507b2b6171" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-5h2h6" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-coredns--674b8bbfcf--5h2h6-eth0" Jan 21 23:39:48.156700 containerd[2113]: 2026-01-21 23:39:48.138 [INFO][4987] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="50420eebae745678d6890ab24d1725d03ca25108ac8e16a6089ffb507b2b6171" Namespace="kube-system" Pod="coredns-674b8bbfcf-5h2h6" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-coredns--674b8bbfcf--5h2h6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--a0ba06055b-k8s-coredns--674b8bbfcf--5h2h6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6e0b4320-3f37-40f4-a5ce-f14aca373850", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 39, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-a0ba06055b", ContainerID:"50420eebae745678d6890ab24d1725d03ca25108ac8e16a6089ffb507b2b6171", Pod:"coredns-674b8bbfcf-5h2h6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali13559461ec0", MAC:"6e:00:eb:8a:ce:a2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:39:48.156700 containerd[2113]: 2026-01-21 23:39:48.149 [INFO][4987] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="50420eebae745678d6890ab24d1725d03ca25108ac8e16a6089ffb507b2b6171" Namespace="kube-system" Pod="coredns-674b8bbfcf-5h2h6" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-coredns--674b8bbfcf--5h2h6-eth0" Jan 21 23:39:48.157250 systemd[1]: Started cri-containerd-2a899cd29b4530ae296dd76853a230a53d0a842a3af057bcd63d4fe68649c594.scope - libcontainer container 2a899cd29b4530ae296dd76853a230a53d0a842a3af057bcd63d4fe68649c594. 
Jan 21 23:39:48.167000 audit: BPF prog-id=240 op=LOAD Jan 21 23:39:48.168000 audit: BPF prog-id=241 op=LOAD Jan 21 23:39:48.168000 audit[5126]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe180 a2=98 a3=0 items=0 ppid=5109 pid=5126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261383939636432396234353330616532393664643736383533613233 Jan 21 23:39:48.168000 audit: BPF prog-id=241 op=UNLOAD Jan 21 23:39:48.168000 audit[5126]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5109 pid=5126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261383939636432396234353330616532393664643736383533613233 Jan 21 23:39:48.168000 audit: BPF prog-id=242 op=LOAD Jan 21 23:39:48.168000 audit[5126]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=5109 pid=5126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261383939636432396234353330616532393664643736383533613233 Jan 21 23:39:48.168000 audit: BPF prog-id=243 op=LOAD Jan 21 23:39:48.168000 audit[5126]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=5109 pid=5126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261383939636432396234353330616532393664643736383533613233 Jan 21 23:39:48.168000 audit: BPF prog-id=243 op=UNLOAD Jan 21 23:39:48.168000 audit[5126]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5109 pid=5126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261383939636432396234353330616532393664643736383533613233 Jan 21 23:39:48.168000 audit: BPF prog-id=242 op=UNLOAD Jan 21 23:39:48.168000 audit[5126]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5109 pid=5126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261383939636432396234353330616532393664643736383533613233 Jan 21 23:39:48.169000 audit: BPF prog-id=244 op=LOAD Jan 21 23:39:48.169000 audit[5126]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=5109 pid=5126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261383939636432396234353330616532393664643736383533613233 Jan 21 23:39:48.185000 audit[5153]: NETFILTER_CFG table=filter:130 family=2 entries=40 op=nft_register_chain pid=5153 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 23:39:48.185000 audit[5153]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20344 a0=3 a1=fffffd814ae0 a2=0 a3=ffff9c3a3fa8 items=0 ppid=4727 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.185000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 23:39:48.200831 containerd[2113]: time="2026-01-21T23:39:48.200735537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-btpgn,Uid:7707a694-44de-42a3-8dd7-f5d1565db734,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a899cd29b4530ae296dd76853a230a53d0a842a3af057bcd63d4fe68649c594\"" Jan 21 23:39:48.204494 containerd[2113]: time="2026-01-21T23:39:48.204303140Z" level=info msg="connecting to shim 50420eebae745678d6890ab24d1725d03ca25108ac8e16a6089ffb507b2b6171" address="unix:///run/containerd/s/944883b5bf1472cfdfd53d7435fa406b042117f0f743db4dab12d51c128e717f" namespace=k8s.io protocol=ttrpc version=3 Jan 21 23:39:48.210906 containerd[2113]: time="2026-01-21T23:39:48.210879465Z" level=info msg="CreateContainer within sandbox \"2a899cd29b4530ae296dd76853a230a53d0a842a3af057bcd63d4fe68649c594\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 21 23:39:48.227282 systemd[1]: Started cri-containerd-50420eebae745678d6890ab24d1725d03ca25108ac8e16a6089ffb507b2b6171.scope - libcontainer container 50420eebae745678d6890ab24d1725d03ca25108ac8e16a6089ffb507b2b6171. 
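The audit PROCTITLE fields throughout this section are hex-encoded command lines with NUL-separated arguments. A minimal decoding sketch, assuming a Python environment on the side (not part of the log), recovers the runc and iptables-nft-restore invocations; the sample call below feeds it a truncated prefix of the iptables-nft-restore record above:

    # Minimal sketch: decode an audit PROCTITLE value (hex string, NUL-separated argv).
    def decode_proctitle(hex_value):
        return bytes.fromhex(hex_value).decode("utf-8", errors="replace").split("\x00")

    # Truncated prefix of the iptables-nft-restore PROCTITLE logged above:
    print(decode_proctitle("69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"))
    # ['iptables-nft-restore', '--noflush']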
Jan 21 23:39:48.232273 containerd[2113]: time="2026-01-21T23:39:48.232235574Z" level=info msg="Container 612bd368c8a6dab3edd6dbc65c958c615546a06eb8340e912fd2f4936c69942e: CDI devices from CRI Config.CDIDevices: []" Jan 21 23:39:48.238000 audit: BPF prog-id=245 op=LOAD Jan 21 23:39:48.239000 audit: BPF prog-id=246 op=LOAD Jan 21 23:39:48.239000 audit[5181]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=5169 pid=5181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530343230656562616537343536373864363839306162323464313732 Jan 21 23:39:48.239000 audit: BPF prog-id=246 op=UNLOAD Jan 21 23:39:48.239000 audit[5181]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5169 pid=5181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530343230656562616537343536373864363839306162323464313732 Jan 21 23:39:48.239000 audit: BPF prog-id=247 op=LOAD Jan 21 23:39:48.239000 audit[5181]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=5169 pid=5181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530343230656562616537343536373864363839306162323464313732 Jan 21 23:39:48.239000 audit: BPF prog-id=248 op=LOAD Jan 21 23:39:48.239000 audit[5181]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=5169 pid=5181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530343230656562616537343536373864363839306162323464313732 Jan 21 23:39:48.239000 audit: BPF prog-id=248 op=UNLOAD Jan 21 23:39:48.239000 audit[5181]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5169 pid=5181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.239000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530343230656562616537343536373864363839306162323464313732 Jan 21 23:39:48.239000 audit: BPF prog-id=247 op=UNLOAD Jan 21 23:39:48.239000 audit[5181]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5169 pid=5181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530343230656562616537343536373864363839306162323464313732 Jan 21 23:39:48.239000 audit: BPF prog-id=249 op=LOAD Jan 21 23:39:48.239000 audit[5181]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=5169 pid=5181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530343230656562616537343536373864363839306162323464313732 Jan 21 23:39:48.247403 containerd[2113]: time="2026-01-21T23:39:48.247365491Z" level=info msg="CreateContainer within sandbox \"2a899cd29b4530ae296dd76853a230a53d0a842a3af057bcd63d4fe68649c594\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"612bd368c8a6dab3edd6dbc65c958c615546a06eb8340e912fd2f4936c69942e\"" Jan 21 23:39:48.248109 containerd[2113]: time="2026-01-21T23:39:48.248084940Z" level=info msg="StartContainer for \"612bd368c8a6dab3edd6dbc65c958c615546a06eb8340e912fd2f4936c69942e\"" Jan 21 23:39:48.248744 containerd[2113]: time="2026-01-21T23:39:48.248716938Z" level=info msg="connecting to shim 612bd368c8a6dab3edd6dbc65c958c615546a06eb8340e912fd2f4936c69942e" address="unix:///run/containerd/s/f195c39b24507b3120e27a27964d175c6232c35b756aa01499aa07210a7bc088" protocol=ttrpc version=3 Jan 21 23:39:48.268219 containerd[2113]: time="2026-01-21T23:39:48.268180278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5h2h6,Uid:6e0b4320-3f37-40f4-a5ce-f14aca373850,Namespace:kube-system,Attempt:0,} returns sandbox id \"50420eebae745678d6890ab24d1725d03ca25108ac8e16a6089ffb507b2b6171\"" Jan 21 23:39:48.269237 systemd[1]: Started cri-containerd-612bd368c8a6dab3edd6dbc65c958c615546a06eb8340e912fd2f4936c69942e.scope - libcontainer container 612bd368c8a6dab3edd6dbc65c958c615546a06eb8340e912fd2f4936c69942e. 
Jan 21 23:39:48.278534 containerd[2113]: time="2026-01-21T23:39:48.278487267Z" level=info msg="CreateContainer within sandbox \"50420eebae745678d6890ab24d1725d03ca25108ac8e16a6089ffb507b2b6171\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 21 23:39:48.281000 audit: BPF prog-id=250 op=LOAD Jan 21 23:39:48.282000 audit: BPF prog-id=251 op=LOAD Jan 21 23:39:48.282000 audit[5200]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e180 a2=98 a3=0 items=0 ppid=5109 pid=5200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631326264333638633861366461623365646436646263363563393538 Jan 21 23:39:48.282000 audit: BPF prog-id=251 op=UNLOAD Jan 21 23:39:48.282000 audit[5200]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5109 pid=5200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631326264333638633861366461623365646436646263363563393538 Jan 21 23:39:48.282000 audit: BPF prog-id=252 op=LOAD Jan 21 23:39:48.282000 audit[5200]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e3e8 a2=98 a3=0 items=0 ppid=5109 pid=5200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631326264333638633861366461623365646436646263363563393538 Jan 21 23:39:48.283000 audit: BPF prog-id=253 op=LOAD Jan 21 23:39:48.283000 audit[5200]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400017e168 a2=98 a3=0 items=0 ppid=5109 pid=5200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631326264333638633861366461623365646436646263363563393538 Jan 21 23:39:48.283000 audit: BPF prog-id=253 op=UNLOAD Jan 21 23:39:48.283000 audit[5200]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5109 pid=5200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.283000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631326264333638633861366461623365646436646263363563393538 Jan 21 23:39:48.283000 audit: BPF prog-id=252 op=UNLOAD Jan 21 23:39:48.283000 audit[5200]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5109 pid=5200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631326264333638633861366461623365646436646263363563393538 Jan 21 23:39:48.283000 audit: BPF prog-id=254 op=LOAD Jan 21 23:39:48.283000 audit[5200]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e648 a2=98 a3=0 items=0 ppid=5109 pid=5200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631326264333638633861366461623365646436646263363563393538 Jan 21 23:39:48.299556 containerd[2113]: time="2026-01-21T23:39:48.299516781Z" level=info msg="Container 8f08f5225162095ed82c373c28cf2d231cdc71908089066023d6cf7dec4898be: CDI devices from CRI Config.CDIDevices: []" Jan 21 23:39:48.304241 containerd[2113]: time="2026-01-21T23:39:48.304201008Z" level=info msg="StartContainer for \"612bd368c8a6dab3edd6dbc65c958c615546a06eb8340e912fd2f4936c69942e\" returns successfully" Jan 21 23:39:48.318388 containerd[2113]: time="2026-01-21T23:39:48.318339491Z" level=info msg="CreateContainer within sandbox \"50420eebae745678d6890ab24d1725d03ca25108ac8e16a6089ffb507b2b6171\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8f08f5225162095ed82c373c28cf2d231cdc71908089066023d6cf7dec4898be\"" Jan 21 23:39:48.319886 containerd[2113]: time="2026-01-21T23:39:48.319851855Z" level=info msg="StartContainer for \"8f08f5225162095ed82c373c28cf2d231cdc71908089066023d6cf7dec4898be\"" Jan 21 23:39:48.321734 containerd[2113]: time="2026-01-21T23:39:48.321038096Z" level=info msg="connecting to shim 8f08f5225162095ed82c373c28cf2d231cdc71908089066023d6cf7dec4898be" address="unix:///run/containerd/s/944883b5bf1472cfdfd53d7435fa406b042117f0f743db4dab12d51c128e717f" protocol=ttrpc version=3 Jan 21 23:39:48.345255 systemd[1]: Started cri-containerd-8f08f5225162095ed82c373c28cf2d231cdc71908089066023d6cf7dec4898be.scope - libcontainer container 8f08f5225162095ed82c373c28cf2d231cdc71908089066023d6cf7dec4898be. 
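The SYSCALL records in this section carry arch=c00000b7 (AArch64) and raw syscall numbers. Assuming the standard arm64 asm-generic syscall table, the numbers that recur here map as sketched below; treat this as an illustration rather than an authoritative decode, and verify against the host's kernel headers if it matters:

    # Assumed mapping from the arm64 (asm-generic) syscall table; verify on the host if needed.
    AARCH64_SYSCALLS = {
        57: "close",     # closing the fd returned by an earlier bpf() call (paired with op=UNLOAD)
        211: "sendmsg",  # netlink batch behind the NETFILTER_CFG nft_register_chain events
        280: "bpf",      # program loads logged as "BPF prog-id=... op=LOAD"
    }
    for number in (280, 57, 211):
        print(number, AARCH64_SYSCALLS[number])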
Jan 21 23:39:48.354000 audit: BPF prog-id=255 op=LOAD Jan 21 23:39:48.355000 audit: BPF prog-id=256 op=LOAD Jan 21 23:39:48.355000 audit[5237]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=5169 pid=5237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.355000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866303866353232353136323039356564383263333733633238636632 Jan 21 23:39:48.355000 audit: BPF prog-id=256 op=UNLOAD Jan 21 23:39:48.355000 audit[5237]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5169 pid=5237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.355000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866303866353232353136323039356564383263333733633238636632 Jan 21 23:39:48.355000 audit: BPF prog-id=257 op=LOAD Jan 21 23:39:48.355000 audit[5237]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=5169 pid=5237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.355000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866303866353232353136323039356564383263333733633238636632 Jan 21 23:39:48.355000 audit: BPF prog-id=258 op=LOAD Jan 21 23:39:48.355000 audit[5237]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=5169 pid=5237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.355000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866303866353232353136323039356564383263333733633238636632 Jan 21 23:39:48.355000 audit: BPF prog-id=258 op=UNLOAD Jan 21 23:39:48.355000 audit[5237]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5169 pid=5237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.355000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866303866353232353136323039356564383263333733633238636632 Jan 21 23:39:48.355000 audit: BPF prog-id=257 op=UNLOAD Jan 21 23:39:48.355000 audit[5237]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5169 pid=5237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.355000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866303866353232353136323039356564383263333733633238636632 Jan 21 23:39:48.356000 audit: BPF prog-id=259 op=LOAD Jan 21 23:39:48.356000 audit[5237]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=5169 pid=5237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.356000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866303866353232353136323039356564383263333733633238636632 Jan 21 23:39:48.376079 containerd[2113]: time="2026-01-21T23:39:48.375854423Z" level=info msg="StartContainer for \"8f08f5225162095ed82c373c28cf2d231cdc71908089066023d6cf7dec4898be\" returns successfully" Jan 21 23:39:48.414831 containerd[2113]: time="2026-01-21T23:39:48.414776382Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:39:48.417811 containerd[2113]: time="2026-01-21T23:39:48.417720300Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 21 23:39:48.417995 containerd[2113]: time="2026-01-21T23:39:48.417927716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 21 23:39:48.418288 kubelet[3545]: E0121 23:39:48.418218 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 23:39:48.418288 kubelet[3545]: E0121 23:39:48.418271 3545 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 23:39:48.419079 kubelet[3545]: E0121 23:39:48.418874 3545 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxhrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-84459bb977-fzglc_calico-system(b1e000bd-2ebe-4f78-af48-a2456035e42f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 21 23:39:48.420093 kubelet[3545]: E0121 23:39:48.420041 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84459bb977-fzglc" podUID="b1e000bd-2ebe-4f78-af48-a2456035e42f" Jan 21 23:39:48.692251 containerd[2113]: time="2026-01-21T23:39:48.692019086Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-wsfxd,Uid:40afe29b-91ee-4738-9c1a-8ee8dd5c9d9f,Namespace:calico-system,Attempt:0,}" Jan 21 23:39:48.692718 containerd[2113]: time="2026-01-21T23:39:48.692661324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vmwkp,Uid:03532856-4a1c-4971-af49-0f675b6cbf1f,Namespace:calico-system,Attempt:0,}" Jan 21 23:39:48.842179 systemd-networkd[1703]: cali9fd1e860916: Link UP Jan 21 23:39:48.843318 systemd-networkd[1703]: cali9fd1e860916: Gained carrier Jan 21 23:39:48.860269 containerd[2113]: 2026-01-21 23:39:48.759 [INFO][5276] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--n--a0ba06055b-k8s-csi--node--driver--vmwkp-eth0 csi-node-driver- calico-system 03532856-4a1c-4971-af49-0f675b6cbf1f 704 0 2026-01-21 23:39:24 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4515.1.0-n-a0ba06055b csi-node-driver-vmwkp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9fd1e860916 [] [] }} ContainerID="8ee77f0c0b1cf08835f5c8c53f8953fe03afdaca3ccd5d69928183d988264382" Namespace="calico-system" Pod="csi-node-driver-vmwkp" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-csi--node--driver--vmwkp-" Jan 21 23:39:48.860269 containerd[2113]: 2026-01-21 23:39:48.759 [INFO][5276] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8ee77f0c0b1cf08835f5c8c53f8953fe03afdaca3ccd5d69928183d988264382" Namespace="calico-system" Pod="csi-node-driver-vmwkp" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-csi--node--driver--vmwkp-eth0" Jan 21 23:39:48.860269 containerd[2113]: 2026-01-21 23:39:48.803 [INFO][5297] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ee77f0c0b1cf08835f5c8c53f8953fe03afdaca3ccd5d69928183d988264382" HandleID="k8s-pod-network.8ee77f0c0b1cf08835f5c8c53f8953fe03afdaca3ccd5d69928183d988264382" Workload="ci--4515.1.0--n--a0ba06055b-k8s-csi--node--driver--vmwkp-eth0" Jan 21 23:39:48.860269 containerd[2113]: 2026-01-21 23:39:48.803 [INFO][5297] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8ee77f0c0b1cf08835f5c8c53f8953fe03afdaca3ccd5d69928183d988264382" HandleID="k8s-pod-network.8ee77f0c0b1cf08835f5c8c53f8953fe03afdaca3ccd5d69928183d988264382" Workload="ci--4515.1.0--n--a0ba06055b-k8s-csi--node--driver--vmwkp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-n-a0ba06055b", "pod":"csi-node-driver-vmwkp", "timestamp":"2026-01-21 23:39:48.803582574 +0000 UTC"}, Hostname:"ci-4515.1.0-n-a0ba06055b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 23:39:48.860269 containerd[2113]: 2026-01-21 23:39:48.803 [INFO][5297] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 23:39:48.860269 containerd[2113]: 2026-01-21 23:39:48.803 [INFO][5297] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 21 23:39:48.860269 containerd[2113]: 2026-01-21 23:39:48.803 [INFO][5297] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-a0ba06055b' Jan 21 23:39:48.860269 containerd[2113]: 2026-01-21 23:39:48.810 [INFO][5297] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8ee77f0c0b1cf08835f5c8c53f8953fe03afdaca3ccd5d69928183d988264382" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:48.860269 containerd[2113]: 2026-01-21 23:39:48.814 [INFO][5297] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:48.860269 containerd[2113]: 2026-01-21 23:39:48.817 [INFO][5297] ipam/ipam.go 511: Trying affinity for 192.168.73.64/26 host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:48.860269 containerd[2113]: 2026-01-21 23:39:48.819 [INFO][5297] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.64/26 host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:48.860269 containerd[2113]: 2026-01-21 23:39:48.820 [INFO][5297] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.64/26 host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:48.860269 containerd[2113]: 2026-01-21 23:39:48.820 [INFO][5297] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.73.64/26 handle="k8s-pod-network.8ee77f0c0b1cf08835f5c8c53f8953fe03afdaca3ccd5d69928183d988264382" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:48.860269 containerd[2113]: 2026-01-21 23:39:48.822 [INFO][5297] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8ee77f0c0b1cf08835f5c8c53f8953fe03afdaca3ccd5d69928183d988264382 Jan 21 23:39:48.860269 containerd[2113]: 2026-01-21 23:39:48.827 [INFO][5297] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.64/26 handle="k8s-pod-network.8ee77f0c0b1cf08835f5c8c53f8953fe03afdaca3ccd5d69928183d988264382" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:48.860269 containerd[2113]: 2026-01-21 23:39:48.835 [INFO][5297] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.73.69/26] block=192.168.73.64/26 handle="k8s-pod-network.8ee77f0c0b1cf08835f5c8c53f8953fe03afdaca3ccd5d69928183d988264382" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:48.860269 containerd[2113]: 2026-01-21 23:39:48.835 [INFO][5297] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.69/26] handle="k8s-pod-network.8ee77f0c0b1cf08835f5c8c53f8953fe03afdaca3ccd5d69928183d988264382" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:48.860269 containerd[2113]: 2026-01-21 23:39:48.835 [INFO][5297] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 21 23:39:48.860269 containerd[2113]: 2026-01-21 23:39:48.835 [INFO][5297] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.69/26] IPv6=[] ContainerID="8ee77f0c0b1cf08835f5c8c53f8953fe03afdaca3ccd5d69928183d988264382" HandleID="k8s-pod-network.8ee77f0c0b1cf08835f5c8c53f8953fe03afdaca3ccd5d69928183d988264382" Workload="ci--4515.1.0--n--a0ba06055b-k8s-csi--node--driver--vmwkp-eth0" Jan 21 23:39:48.861096 containerd[2113]: 2026-01-21 23:39:48.836 [INFO][5276] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8ee77f0c0b1cf08835f5c8c53f8953fe03afdaca3ccd5d69928183d988264382" Namespace="calico-system" Pod="csi-node-driver-vmwkp" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-csi--node--driver--vmwkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--a0ba06055b-k8s-csi--node--driver--vmwkp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"03532856-4a1c-4971-af49-0f675b6cbf1f", ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 39, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-a0ba06055b", ContainerID:"", Pod:"csi-node-driver-vmwkp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.73.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9fd1e860916", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:39:48.861096 containerd[2113]: 2026-01-21 23:39:48.837 [INFO][5276] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.69/32] ContainerID="8ee77f0c0b1cf08835f5c8c53f8953fe03afdaca3ccd5d69928183d988264382" Namespace="calico-system" Pod="csi-node-driver-vmwkp" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-csi--node--driver--vmwkp-eth0" Jan 21 23:39:48.861096 containerd[2113]: 2026-01-21 23:39:48.837 [INFO][5276] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9fd1e860916 ContainerID="8ee77f0c0b1cf08835f5c8c53f8953fe03afdaca3ccd5d69928183d988264382" Namespace="calico-system" Pod="csi-node-driver-vmwkp" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-csi--node--driver--vmwkp-eth0" Jan 21 23:39:48.861096 containerd[2113]: 2026-01-21 23:39:48.844 [INFO][5276] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ee77f0c0b1cf08835f5c8c53f8953fe03afdaca3ccd5d69928183d988264382" Namespace="calico-system" Pod="csi-node-driver-vmwkp" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-csi--node--driver--vmwkp-eth0" Jan 21 23:39:48.861096 containerd[2113]: 2026-01-21 23:39:48.845 [INFO][5276] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8ee77f0c0b1cf08835f5c8c53f8953fe03afdaca3ccd5d69928183d988264382" Namespace="calico-system" Pod="csi-node-driver-vmwkp" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-csi--node--driver--vmwkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--a0ba06055b-k8s-csi--node--driver--vmwkp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"03532856-4a1c-4971-af49-0f675b6cbf1f", ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 39, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-a0ba06055b", ContainerID:"8ee77f0c0b1cf08835f5c8c53f8953fe03afdaca3ccd5d69928183d988264382", Pod:"csi-node-driver-vmwkp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.73.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9fd1e860916", MAC:"de:a3:56:95:2a:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:39:48.861096 containerd[2113]: 2026-01-21 23:39:48.856 [INFO][5276] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8ee77f0c0b1cf08835f5c8c53f8953fe03afdaca3ccd5d69928183d988264382" Namespace="calico-system" Pod="csi-node-driver-vmwkp" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-csi--node--driver--vmwkp-eth0" Jan 21 23:39:48.875000 audit[5321]: NETFILTER_CFG table=filter:131 family=2 entries=48 op=nft_register_chain pid=5321 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 23:39:48.875000 audit[5321]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23140 a0=3 a1=ffffd02bfa50 a2=0 a3=ffff975adfa8 items=0 ppid=4727 pid=5321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.875000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 23:39:48.899624 containerd[2113]: time="2026-01-21T23:39:48.899573834Z" level=info msg="connecting to shim 8ee77f0c0b1cf08835f5c8c53f8953fe03afdaca3ccd5d69928183d988264382" address="unix:///run/containerd/s/9c928d0b05f3895ecf143cf04c538135ce704860ee5e405c890bcd5e54123f0e" namespace=k8s.io protocol=ttrpc version=3 Jan 21 23:39:48.924242 systemd[1]: Started cri-containerd-8ee77f0c0b1cf08835f5c8c53f8953fe03afdaca3ccd5d69928183d988264382.scope - libcontainer container 8ee77f0c0b1cf08835f5c8c53f8953fe03afdaca3ccd5d69928183d988264382. 
Jan 21 23:39:48.936000 audit: BPF prog-id=260 op=LOAD Jan 21 23:39:48.937000 audit: BPF prog-id=261 op=LOAD Jan 21 23:39:48.937000 audit[5341]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=5331 pid=5341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865653737663063306231636630383833356635633863353366383935 Jan 21 23:39:48.937000 audit: BPF prog-id=261 op=UNLOAD Jan 21 23:39:48.937000 audit[5341]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5331 pid=5341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865653737663063306231636630383833356635633863353366383935 Jan 21 23:39:48.939000 audit: BPF prog-id=262 op=LOAD Jan 21 23:39:48.939000 audit[5341]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=5331 pid=5341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865653737663063306231636630383833356635633863353366383935 Jan 21 23:39:48.941000 audit: BPF prog-id=263 op=LOAD Jan 21 23:39:48.943363 kubelet[3545]: E0121 23:39:48.942657 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84459bb977-fzglc" podUID="b1e000bd-2ebe-4f78-af48-a2456035e42f" Jan 21 23:39:48.941000 audit[5341]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=5331 pid=5341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865653737663063306231636630383833356635633863353366383935 Jan 21 23:39:48.943000 audit: BPF prog-id=263 op=UNLOAD Jan 21 23:39:48.943000 audit[5341]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 
a2=0 a3=0 items=0 ppid=5331 pid=5341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865653737663063306231636630383833356635633863353366383935 Jan 21 23:39:48.943000 audit: BPF prog-id=262 op=UNLOAD Jan 21 23:39:48.943000 audit[5341]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5331 pid=5341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865653737663063306231636630383833356635633863353366383935 Jan 21 23:39:48.943000 audit: BPF prog-id=264 op=LOAD Jan 21 23:39:48.943000 audit[5341]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=5331 pid=5341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:48.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865653737663063306231636630383833356635633863353366383935 Jan 21 23:39:48.956646 systemd-networkd[1703]: calia8cee4bf0f5: Link UP Jan 21 23:39:48.958641 systemd-networkd[1703]: calia8cee4bf0f5: Gained carrier Jan 21 23:39:48.979263 kubelet[3545]: I0121 23:39:48.978412 3545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-btpgn" podStartSLOduration=38.978392362 podStartE2EDuration="38.978392362s" podCreationTimestamp="2026-01-21 23:39:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 23:39:48.973239023 +0000 UTC m=+46.423098913" watchObservedRunningTime="2026-01-21 23:39:48.978392362 +0000 UTC m=+46.428252268" Jan 21 23:39:48.983883 containerd[2113]: 2026-01-21 23:39:48.780 [INFO][5272] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--n--a0ba06055b-k8s-goldmane--666569f655--wsfxd-eth0 goldmane-666569f655- calico-system 40afe29b-91ee-4738-9c1a-8ee8dd5c9d9f 809 0 2026-01-21 23:39:22 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4515.1.0-n-a0ba06055b goldmane-666569f655-wsfxd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia8cee4bf0f5 [] [] }} ContainerID="ab097e0f7c1e78a80efe7c2287d9e288d81267e9bacca67c9d154edc643b7b90" Namespace="calico-system" Pod="goldmane-666569f655-wsfxd" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-goldmane--666569f655--wsfxd-" Jan 21 23:39:48.983883 
containerd[2113]: 2026-01-21 23:39:48.781 [INFO][5272] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ab097e0f7c1e78a80efe7c2287d9e288d81267e9bacca67c9d154edc643b7b90" Namespace="calico-system" Pod="goldmane-666569f655-wsfxd" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-goldmane--666569f655--wsfxd-eth0" Jan 21 23:39:48.983883 containerd[2113]: 2026-01-21 23:39:48.807 [INFO][5302] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ab097e0f7c1e78a80efe7c2287d9e288d81267e9bacca67c9d154edc643b7b90" HandleID="k8s-pod-network.ab097e0f7c1e78a80efe7c2287d9e288d81267e9bacca67c9d154edc643b7b90" Workload="ci--4515.1.0--n--a0ba06055b-k8s-goldmane--666569f655--wsfxd-eth0" Jan 21 23:39:48.983883 containerd[2113]: 2026-01-21 23:39:48.808 [INFO][5302] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ab097e0f7c1e78a80efe7c2287d9e288d81267e9bacca67c9d154edc643b7b90" HandleID="k8s-pod-network.ab097e0f7c1e78a80efe7c2287d9e288d81267e9bacca67c9d154edc643b7b90" Workload="ci--4515.1.0--n--a0ba06055b-k8s-goldmane--666569f655--wsfxd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab3a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-n-a0ba06055b", "pod":"goldmane-666569f655-wsfxd", "timestamp":"2026-01-21 23:39:48.807824073 +0000 UTC"}, Hostname:"ci-4515.1.0-n-a0ba06055b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 23:39:48.983883 containerd[2113]: 2026-01-21 23:39:48.808 [INFO][5302] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 23:39:48.983883 containerd[2113]: 2026-01-21 23:39:48.835 [INFO][5302] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 21 23:39:48.983883 containerd[2113]: 2026-01-21 23:39:48.836 [INFO][5302] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-a0ba06055b' Jan 21 23:39:48.983883 containerd[2113]: 2026-01-21 23:39:48.911 [INFO][5302] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ab097e0f7c1e78a80efe7c2287d9e288d81267e9bacca67c9d154edc643b7b90" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:48.983883 containerd[2113]: 2026-01-21 23:39:48.916 [INFO][5302] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:48.983883 containerd[2113]: 2026-01-21 23:39:48.922 [INFO][5302] ipam/ipam.go 511: Trying affinity for 192.168.73.64/26 host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:48.983883 containerd[2113]: 2026-01-21 23:39:48.924 [INFO][5302] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.64/26 host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:48.983883 containerd[2113]: 2026-01-21 23:39:48.926 [INFO][5302] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.64/26 host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:48.983883 containerd[2113]: 2026-01-21 23:39:48.926 [INFO][5302] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.73.64/26 handle="k8s-pod-network.ab097e0f7c1e78a80efe7c2287d9e288d81267e9bacca67c9d154edc643b7b90" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:48.983883 containerd[2113]: 2026-01-21 23:39:48.928 [INFO][5302] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ab097e0f7c1e78a80efe7c2287d9e288d81267e9bacca67c9d154edc643b7b90 Jan 21 23:39:48.983883 containerd[2113]: 2026-01-21 23:39:48.932 [INFO][5302] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.64/26 handle="k8s-pod-network.ab097e0f7c1e78a80efe7c2287d9e288d81267e9bacca67c9d154edc643b7b90" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:48.983883 containerd[2113]: 2026-01-21 23:39:48.945 [INFO][5302] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.73.70/26] block=192.168.73.64/26 handle="k8s-pod-network.ab097e0f7c1e78a80efe7c2287d9e288d81267e9bacca67c9d154edc643b7b90" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:48.983883 containerd[2113]: 2026-01-21 23:39:48.945 [INFO][5302] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.70/26] handle="k8s-pod-network.ab097e0f7c1e78a80efe7c2287d9e288d81267e9bacca67c9d154edc643b7b90" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:48.983883 containerd[2113]: 2026-01-21 23:39:48.945 [INFO][5302] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 21 23:39:48.983883 containerd[2113]: 2026-01-21 23:39:48.945 [INFO][5302] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.70/26] IPv6=[] ContainerID="ab097e0f7c1e78a80efe7c2287d9e288d81267e9bacca67c9d154edc643b7b90" HandleID="k8s-pod-network.ab097e0f7c1e78a80efe7c2287d9e288d81267e9bacca67c9d154edc643b7b90" Workload="ci--4515.1.0--n--a0ba06055b-k8s-goldmane--666569f655--wsfxd-eth0" Jan 21 23:39:48.984318 containerd[2113]: 2026-01-21 23:39:48.950 [INFO][5272] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ab097e0f7c1e78a80efe7c2287d9e288d81267e9bacca67c9d154edc643b7b90" Namespace="calico-system" Pod="goldmane-666569f655-wsfxd" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-goldmane--666569f655--wsfxd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--a0ba06055b-k8s-goldmane--666569f655--wsfxd-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"40afe29b-91ee-4738-9c1a-8ee8dd5c9d9f", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 39, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-a0ba06055b", ContainerID:"", Pod:"goldmane-666569f655-wsfxd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.73.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia8cee4bf0f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:39:48.984318 containerd[2113]: 2026-01-21 23:39:48.950 [INFO][5272] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.70/32] ContainerID="ab097e0f7c1e78a80efe7c2287d9e288d81267e9bacca67c9d154edc643b7b90" Namespace="calico-system" Pod="goldmane-666569f655-wsfxd" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-goldmane--666569f655--wsfxd-eth0" Jan 21 23:39:48.984318 containerd[2113]: 2026-01-21 23:39:48.950 [INFO][5272] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia8cee4bf0f5 ContainerID="ab097e0f7c1e78a80efe7c2287d9e288d81267e9bacca67c9d154edc643b7b90" Namespace="calico-system" Pod="goldmane-666569f655-wsfxd" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-goldmane--666569f655--wsfxd-eth0" Jan 21 23:39:48.984318 containerd[2113]: 2026-01-21 23:39:48.959 [INFO][5272] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ab097e0f7c1e78a80efe7c2287d9e288d81267e9bacca67c9d154edc643b7b90" Namespace="calico-system" Pod="goldmane-666569f655-wsfxd" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-goldmane--666569f655--wsfxd-eth0" Jan 21 23:39:48.984318 containerd[2113]: 2026-01-21 23:39:48.960 [INFO][5272] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ab097e0f7c1e78a80efe7c2287d9e288d81267e9bacca67c9d154edc643b7b90" 
Namespace="calico-system" Pod="goldmane-666569f655-wsfxd" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-goldmane--666569f655--wsfxd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--a0ba06055b-k8s-goldmane--666569f655--wsfxd-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"40afe29b-91ee-4738-9c1a-8ee8dd5c9d9f", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 39, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-a0ba06055b", ContainerID:"ab097e0f7c1e78a80efe7c2287d9e288d81267e9bacca67c9d154edc643b7b90", Pod:"goldmane-666569f655-wsfxd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.73.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia8cee4bf0f5", MAC:"e6:97:e2:eb:94:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:39:48.984318 containerd[2113]: 2026-01-21 23:39:48.979 [INFO][5272] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ab097e0f7c1e78a80efe7c2287d9e288d81267e9bacca67c9d154edc643b7b90" Namespace="calico-system" Pod="goldmane-666569f655-wsfxd" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-goldmane--666569f655--wsfxd-eth0" Jan 21 23:39:48.988208 systemd-networkd[1703]: cali028f81e5b84: Gained IPv6LL Jan 21 23:39:48.997229 containerd[2113]: time="2026-01-21T23:39:48.997012888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vmwkp,Uid:03532856-4a1c-4971-af49-0f675b6cbf1f,Namespace:calico-system,Attempt:0,} returns sandbox id \"8ee77f0c0b1cf08835f5c8c53f8953fe03afdaca3ccd5d69928183d988264382\"" Jan 21 23:39:49.000264 containerd[2113]: time="2026-01-21T23:39:49.000178942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 21 23:39:49.017000 audit[5380]: NETFILTER_CFG table=filter:132 family=2 entries=60 op=nft_register_chain pid=5380 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 23:39:49.017000 audit[5380]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29932 a0=3 a1=ffffc14c8150 a2=0 a3=ffffa592bfa8 items=0 ppid=4727 pid=5380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:49.017000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 23:39:49.021669 kubelet[3545]: I0121 23:39:49.021598 3545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5h2h6" podStartSLOduration=39.021577213 podStartE2EDuration="39.021577213s" 
podCreationTimestamp="2026-01-21 23:39:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 23:39:48.995812334 +0000 UTC m=+46.445672216" watchObservedRunningTime="2026-01-21 23:39:49.021577213 +0000 UTC m=+46.471437103" Jan 21 23:39:49.024000 audit[5381]: NETFILTER_CFG table=filter:133 family=2 entries=20 op=nft_register_rule pid=5381 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:49.024000 audit[5381]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff55b5370 a2=0 a3=1 items=0 ppid=3761 pid=5381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:49.024000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:49.027000 audit[5381]: NETFILTER_CFG table=nat:134 family=2 entries=14 op=nft_register_rule pid=5381 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:49.027000 audit[5381]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=fffff55b5370 a2=0 a3=1 items=0 ppid=3761 pid=5381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:49.027000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:49.040538 containerd[2113]: time="2026-01-21T23:39:49.040364489Z" level=info msg="connecting to shim ab097e0f7c1e78a80efe7c2287d9e288d81267e9bacca67c9d154edc643b7b90" address="unix:///run/containerd/s/5d3cb8c376723d23c837eb531583670415f2a3bae79919bdb9d2a24472a2a207" namespace=k8s.io protocol=ttrpc version=3 Jan 21 23:39:49.069312 systemd[1]: Started cri-containerd-ab097e0f7c1e78a80efe7c2287d9e288d81267e9bacca67c9d154edc643b7b90.scope - libcontainer container ab097e0f7c1e78a80efe7c2287d9e288d81267e9bacca67c9d154edc643b7b90. 
Jan 21 23:39:49.070000 audit[5416]: NETFILTER_CFG table=filter:135 family=2 entries=17 op=nft_register_rule pid=5416 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:49.070000 audit[5416]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff872e350 a2=0 a3=1 items=0 ppid=3761 pid=5416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:49.070000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:49.072000 audit[5416]: NETFILTER_CFG table=nat:136 family=2 entries=35 op=nft_register_chain pid=5416 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:49.072000 audit[5416]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=fffff872e350 a2=0 a3=1 items=0 ppid=3761 pid=5416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:49.072000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:49.081000 audit: BPF prog-id=265 op=LOAD Jan 21 23:39:49.082000 audit: BPF prog-id=266 op=LOAD Jan 21 23:39:49.082000 audit[5404]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=5393 pid=5404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:49.082000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162303937653066376331653738613830656665376332323837643965 Jan 21 23:39:49.082000 audit: BPF prog-id=266 op=UNLOAD Jan 21 23:39:49.082000 audit[5404]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5393 pid=5404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:49.082000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162303937653066376331653738613830656665376332323837643965 Jan 21 23:39:49.082000 audit: BPF prog-id=267 op=LOAD Jan 21 23:39:49.082000 audit[5404]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=5393 pid=5404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:49.082000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162303937653066376331653738613830656665376332323837643965 Jan 21 23:39:49.082000 audit: BPF prog-id=268 op=LOAD Jan 21 23:39:49.082000 audit[5404]: 
SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=5393 pid=5404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:49.082000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162303937653066376331653738613830656665376332323837643965 Jan 21 23:39:49.082000 audit: BPF prog-id=268 op=UNLOAD Jan 21 23:39:49.082000 audit[5404]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5393 pid=5404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:49.082000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162303937653066376331653738613830656665376332323837643965 Jan 21 23:39:49.082000 audit: BPF prog-id=267 op=UNLOAD Jan 21 23:39:49.082000 audit[5404]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5393 pid=5404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:49.082000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162303937653066376331653738613830656665376332323837643965 Jan 21 23:39:49.082000 audit: BPF prog-id=269 op=LOAD Jan 21 23:39:49.082000 audit[5404]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=5393 pid=5404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:49.082000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162303937653066376331653738613830656665376332323837643965 Jan 21 23:39:49.111014 containerd[2113]: time="2026-01-21T23:39:49.110974828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wsfxd,Uid:40afe29b-91ee-4738-9c1a-8ee8dd5c9d9f,Namespace:calico-system,Attempt:0,} returns sandbox id \"ab097e0f7c1e78a80efe7c2287d9e288d81267e9bacca67c9d154edc643b7b90\"" Jan 21 23:39:49.180210 systemd-networkd[1703]: calib01bb7c92ab: Gained IPv6LL Jan 21 23:39:49.266852 containerd[2113]: time="2026-01-21T23:39:49.266792092Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:39:49.269686 containerd[2113]: time="2026-01-21T23:39:49.269636807Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 21 23:39:49.269765 
containerd[2113]: time="2026-01-21T23:39:49.269740819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 21 23:39:49.270002 kubelet[3545]: E0121 23:39:49.269958 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 23:39:49.270083 kubelet[3545]: E0121 23:39:49.270014 3545 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 23:39:49.270259 kubelet[3545]: E0121 23:39:49.270227 3545 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sc6dl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vmwkp_calico-system(03532856-4a1c-4971-af49-0f675b6cbf1f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 21 23:39:49.270870 containerd[2113]: time="2026-01-21T23:39:49.270837537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 21 23:39:49.491216 containerd[2113]: time="2026-01-21T23:39:49.491166249Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:39:49.493949 containerd[2113]: time="2026-01-21T23:39:49.493906648Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 21 23:39:49.494034 containerd[2113]: time="2026-01-21T23:39:49.493999443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 21 23:39:49.494221 kubelet[3545]: E0121 23:39:49.494184 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 23:39:49.494523 kubelet[3545]: E0121 23:39:49.494234 3545 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 23:39:49.494523 kubelet[3545]: E0121 23:39:49.494426 3545 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dkjgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wsfxd_calico-system(40afe29b-91ee-4738-9c1a-8ee8dd5c9d9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 21 23:39:49.495408 containerd[2113]: time="2026-01-21T23:39:49.495380219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 21 23:39:49.495687 kubelet[3545]: E0121 23:39:49.495658 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wsfxd" podUID="40afe29b-91ee-4738-9c1a-8ee8dd5c9d9f" Jan 21 23:39:49.690947 containerd[2113]: time="2026-01-21T23:39:49.690728976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f4fc7866-d746f,Uid:fe4222fd-b3e4-4022-8b44-793668b7e61d,Namespace:calico-apiserver,Attempt:0,}" Jan 21 23:39:49.691153 containerd[2113]: time="2026-01-21T23:39:49.690729752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f4fc7866-w9csf,Uid:67e69fef-e284-4866-bf16-ca5d0645fcac,Namespace:calico-apiserver,Attempt:0,}" Jan 21 23:39:49.791620 containerd[2113]: time="2026-01-21T23:39:49.791572932Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:39:49.820203 systemd-networkd[1703]: cali458adeaa12b: Link UP Jan 21 23:39:49.822085 systemd-networkd[1703]: cali458adeaa12b: Gained carrier Jan 21 23:39:49.838530 containerd[2113]: time="2026-01-21T23:39:49.837836466Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 21 23:39:49.838530 containerd[2113]: time="2026-01-21T23:39:49.837945574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 21 23:39:49.838831 kubelet[3545]: E0121 23:39:49.838254 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 23:39:49.838831 kubelet[3545]: E0121 23:39:49.838309 3545 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 23:39:49.838831 kubelet[3545]: E0121 23:39:49.838462 3545 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sc6dl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vmwkp_calico-system(03532856-4a1c-4971-af49-0f675b6cbf1f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 21 23:39:49.839444 containerd[2113]: 2026-01-21 23:39:49.752 [INFO][5433] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--n--a0ba06055b-k8s-calico--apiserver--86f4fc7866--d746f-eth0 calico-apiserver-86f4fc7866- calico-apiserver fe4222fd-b3e4-4022-8b44-793668b7e61d 810 0 2026-01-21 23:39:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86f4fc7866 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515.1.0-n-a0ba06055b calico-apiserver-86f4fc7866-d746f eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] cali458adeaa12b [] [] }} ContainerID="01bfadbb0821fcc2ab6c93e80485b1d63c1fe5cd8879420cf6bae5ef8b73de89" Namespace="calico-apiserver" Pod="calico-apiserver-86f4fc7866-d746f" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-calico--apiserver--86f4fc7866--d746f-" Jan 21 23:39:49.839444 containerd[2113]: 2026-01-21 23:39:49.752 [INFO][5433] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="01bfadbb0821fcc2ab6c93e80485b1d63c1fe5cd8879420cf6bae5ef8b73de89" Namespace="calico-apiserver" Pod="calico-apiserver-86f4fc7866-d746f" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-calico--apiserver--86f4fc7866--d746f-eth0" Jan 21 23:39:49.839444 containerd[2113]: 2026-01-21 23:39:49.779 [INFO][5457] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01bfadbb0821fcc2ab6c93e80485b1d63c1fe5cd8879420cf6bae5ef8b73de89" HandleID="k8s-pod-network.01bfadbb0821fcc2ab6c93e80485b1d63c1fe5cd8879420cf6bae5ef8b73de89" Workload="ci--4515.1.0--n--a0ba06055b-k8s-calico--apiserver--86f4fc7866--d746f-eth0" Jan 21 23:39:49.839444 containerd[2113]: 2026-01-21 23:39:49.779 [INFO][5457] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="01bfadbb0821fcc2ab6c93e80485b1d63c1fe5cd8879420cf6bae5ef8b73de89" HandleID="k8s-pod-network.01bfadbb0821fcc2ab6c93e80485b1d63c1fe5cd8879420cf6bae5ef8b73de89" Workload="ci--4515.1.0--n--a0ba06055b-k8s-calico--apiserver--86f4fc7866--d746f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb5a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515.1.0-n-a0ba06055b", "pod":"calico-apiserver-86f4fc7866-d746f", "timestamp":"2026-01-21 23:39:49.779095371 +0000 UTC"}, Hostname:"ci-4515.1.0-n-a0ba06055b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 23:39:49.839444 containerd[2113]: 2026-01-21 23:39:49.779 [INFO][5457] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 23:39:49.839444 containerd[2113]: 2026-01-21 23:39:49.779 [INFO][5457] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 21 23:39:49.839444 containerd[2113]: 2026-01-21 23:39:49.779 [INFO][5457] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-a0ba06055b' Jan 21 23:39:49.839444 containerd[2113]: 2026-01-21 23:39:49.787 [INFO][5457] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.01bfadbb0821fcc2ab6c93e80485b1d63c1fe5cd8879420cf6bae5ef8b73de89" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:49.839444 containerd[2113]: 2026-01-21 23:39:49.792 [INFO][5457] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:49.839444 containerd[2113]: 2026-01-21 23:39:49.796 [INFO][5457] ipam/ipam.go 511: Trying affinity for 192.168.73.64/26 host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:49.839444 containerd[2113]: 2026-01-21 23:39:49.798 [INFO][5457] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.64/26 host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:49.839444 containerd[2113]: 2026-01-21 23:39:49.800 [INFO][5457] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.64/26 host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:49.839444 containerd[2113]: 2026-01-21 23:39:49.800 [INFO][5457] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.73.64/26 handle="k8s-pod-network.01bfadbb0821fcc2ab6c93e80485b1d63c1fe5cd8879420cf6bae5ef8b73de89" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:49.839444 containerd[2113]: 2026-01-21 23:39:49.801 [INFO][5457] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.01bfadbb0821fcc2ab6c93e80485b1d63c1fe5cd8879420cf6bae5ef8b73de89 Jan 21 23:39:49.839444 containerd[2113]: 2026-01-21 23:39:49.805 [INFO][5457] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.64/26 handle="k8s-pod-network.01bfadbb0821fcc2ab6c93e80485b1d63c1fe5cd8879420cf6bae5ef8b73de89" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:49.839444 containerd[2113]: 2026-01-21 23:39:49.812 [INFO][5457] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.73.71/26] block=192.168.73.64/26 handle="k8s-pod-network.01bfadbb0821fcc2ab6c93e80485b1d63c1fe5cd8879420cf6bae5ef8b73de89" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:49.839444 containerd[2113]: 2026-01-21 23:39:49.813 [INFO][5457] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.71/26] handle="k8s-pod-network.01bfadbb0821fcc2ab6c93e80485b1d63c1fe5cd8879420cf6bae5ef8b73de89" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:49.839444 containerd[2113]: 2026-01-21 23:39:49.814 [INFO][5457] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 21 23:39:49.839444 containerd[2113]: 2026-01-21 23:39:49.814 [INFO][5457] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.71/26] IPv6=[] ContainerID="01bfadbb0821fcc2ab6c93e80485b1d63c1fe5cd8879420cf6bae5ef8b73de89" HandleID="k8s-pod-network.01bfadbb0821fcc2ab6c93e80485b1d63c1fe5cd8879420cf6bae5ef8b73de89" Workload="ci--4515.1.0--n--a0ba06055b-k8s-calico--apiserver--86f4fc7866--d746f-eth0" Jan 21 23:39:49.841435 containerd[2113]: 2026-01-21 23:39:49.816 [INFO][5433] cni-plugin/k8s.go 418: Populated endpoint ContainerID="01bfadbb0821fcc2ab6c93e80485b1d63c1fe5cd8879420cf6bae5ef8b73de89" Namespace="calico-apiserver" Pod="calico-apiserver-86f4fc7866-d746f" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-calico--apiserver--86f4fc7866--d746f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--a0ba06055b-k8s-calico--apiserver--86f4fc7866--d746f-eth0", GenerateName:"calico-apiserver-86f4fc7866-", Namespace:"calico-apiserver", SelfLink:"", UID:"fe4222fd-b3e4-4022-8b44-793668b7e61d", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 39, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f4fc7866", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-a0ba06055b", ContainerID:"", Pod:"calico-apiserver-86f4fc7866-d746f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali458adeaa12b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:39:49.841435 containerd[2113]: 2026-01-21 23:39:49.816 [INFO][5433] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.71/32] ContainerID="01bfadbb0821fcc2ab6c93e80485b1d63c1fe5cd8879420cf6bae5ef8b73de89" Namespace="calico-apiserver" Pod="calico-apiserver-86f4fc7866-d746f" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-calico--apiserver--86f4fc7866--d746f-eth0" Jan 21 23:39:49.841435 containerd[2113]: 2026-01-21 23:39:49.816 [INFO][5433] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali458adeaa12b ContainerID="01bfadbb0821fcc2ab6c93e80485b1d63c1fe5cd8879420cf6bae5ef8b73de89" Namespace="calico-apiserver" Pod="calico-apiserver-86f4fc7866-d746f" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-calico--apiserver--86f4fc7866--d746f-eth0" Jan 21 23:39:49.841435 containerd[2113]: 2026-01-21 23:39:49.822 [INFO][5433] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01bfadbb0821fcc2ab6c93e80485b1d63c1fe5cd8879420cf6bae5ef8b73de89" Namespace="calico-apiserver" Pod="calico-apiserver-86f4fc7866-d746f" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-calico--apiserver--86f4fc7866--d746f-eth0" Jan 21 23:39:49.841435 containerd[2113]: 2026-01-21 23:39:49.823 
[INFO][5433] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="01bfadbb0821fcc2ab6c93e80485b1d63c1fe5cd8879420cf6bae5ef8b73de89" Namespace="calico-apiserver" Pod="calico-apiserver-86f4fc7866-d746f" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-calico--apiserver--86f4fc7866--d746f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--a0ba06055b-k8s-calico--apiserver--86f4fc7866--d746f-eth0", GenerateName:"calico-apiserver-86f4fc7866-", Namespace:"calico-apiserver", SelfLink:"", UID:"fe4222fd-b3e4-4022-8b44-793668b7e61d", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 39, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f4fc7866", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-a0ba06055b", ContainerID:"01bfadbb0821fcc2ab6c93e80485b1d63c1fe5cd8879420cf6bae5ef8b73de89", Pod:"calico-apiserver-86f4fc7866-d746f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali458adeaa12b", MAC:"76:ac:6f:cd:5f:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:39:49.841435 containerd[2113]: 2026-01-21 23:39:49.835 [INFO][5433] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="01bfadbb0821fcc2ab6c93e80485b1d63c1fe5cd8879420cf6bae5ef8b73de89" Namespace="calico-apiserver" Pod="calico-apiserver-86f4fc7866-d746f" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-calico--apiserver--86f4fc7866--d746f-eth0" Jan 21 23:39:49.841732 kubelet[3545]: E0121 23:39:49.839640 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vmwkp" podUID="03532856-4a1c-4971-af49-0f675b6cbf1f" Jan 21 23:39:49.857000 audit[5479]: NETFILTER_CFG table=filter:137 family=2 entries=76 op=nft_register_chain pid=5479 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 23:39:49.857000 audit[5479]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=37000 a0=3 a1=ffffdce51800 a2=0 a3=ffff9455efa8 items=0 ppid=4727 pid=5479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:49.857000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 23:39:49.931927 containerd[2113]: time="2026-01-21T23:39:49.931857738Z" level=info msg="connecting to shim 01bfadbb0821fcc2ab6c93e80485b1d63c1fe5cd8879420cf6bae5ef8b73de89" address="unix:///run/containerd/s/737d787c0b9b3b69b2a85d8af80655c370170f981b251d5518473d92eae9e1f9" namespace=k8s.io protocol=ttrpc version=3 Jan 21 23:39:49.932404 systemd-networkd[1703]: cali8e421bad3fd: Link UP Jan 21 23:39:49.932517 systemd-networkd[1703]: cali8e421bad3fd: Gained carrier Jan 21 23:39:49.951561 containerd[2113]: 2026-01-21 23:39:49.759 [INFO][5442] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--n--a0ba06055b-k8s-calico--apiserver--86f4fc7866--w9csf-eth0 calico-apiserver-86f4fc7866- calico-apiserver 67e69fef-e284-4866-bf16-ca5d0645fcac 811 0 2026-01-21 23:39:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86f4fc7866 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515.1.0-n-a0ba06055b calico-apiserver-86f4fc7866-w9csf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8e421bad3fd [] [] }} ContainerID="473bebdf8f20e9cc7a8d0dacd96352529707d38e2b066d30d3a48b02278866cc" Namespace="calico-apiserver" Pod="calico-apiserver-86f4fc7866-w9csf" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-calico--apiserver--86f4fc7866--w9csf-" Jan 21 23:39:49.951561 containerd[2113]: 2026-01-21 23:39:49.759 [INFO][5442] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="473bebdf8f20e9cc7a8d0dacd96352529707d38e2b066d30d3a48b02278866cc" Namespace="calico-apiserver" Pod="calico-apiserver-86f4fc7866-w9csf" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-calico--apiserver--86f4fc7866--w9csf-eth0" Jan 21 23:39:49.951561 containerd[2113]: 2026-01-21 23:39:49.784 [INFO][5463] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="473bebdf8f20e9cc7a8d0dacd96352529707d38e2b066d30d3a48b02278866cc" HandleID="k8s-pod-network.473bebdf8f20e9cc7a8d0dacd96352529707d38e2b066d30d3a48b02278866cc" Workload="ci--4515.1.0--n--a0ba06055b-k8s-calico--apiserver--86f4fc7866--w9csf-eth0" Jan 21 23:39:49.951561 containerd[2113]: 2026-01-21 23:39:49.785 [INFO][5463] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="473bebdf8f20e9cc7a8d0dacd96352529707d38e2b066d30d3a48b02278866cc" HandleID="k8s-pod-network.473bebdf8f20e9cc7a8d0dacd96352529707d38e2b066d30d3a48b02278866cc" Workload="ci--4515.1.0--n--a0ba06055b-k8s-calico--apiserver--86f4fc7866--w9csf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000255ce0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515.1.0-n-a0ba06055b", "pod":"calico-apiserver-86f4fc7866-w9csf", "timestamp":"2026-01-21 23:39:49.784855387 +0000 UTC"}, Hostname:"ci-4515.1.0-n-a0ba06055b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 23:39:49.951561 containerd[2113]: 2026-01-21 23:39:49.785 [INFO][5463] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 23:39:49.951561 containerd[2113]: 2026-01-21 23:39:49.813 [INFO][5463] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 21 23:39:49.951561 containerd[2113]: 2026-01-21 23:39:49.813 [INFO][5463] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-a0ba06055b' Jan 21 23:39:49.951561 containerd[2113]: 2026-01-21 23:39:49.887 [INFO][5463] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.473bebdf8f20e9cc7a8d0dacd96352529707d38e2b066d30d3a48b02278866cc" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:49.951561 containerd[2113]: 2026-01-21 23:39:49.892 [INFO][5463] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:49.951561 containerd[2113]: 2026-01-21 23:39:49.900 [INFO][5463] ipam/ipam.go 511: Trying affinity for 192.168.73.64/26 host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:49.951561 containerd[2113]: 2026-01-21 23:39:49.902 [INFO][5463] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.64/26 host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:49.951561 containerd[2113]: 2026-01-21 23:39:49.904 [INFO][5463] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.64/26 host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:49.951561 containerd[2113]: 2026-01-21 23:39:49.904 [INFO][5463] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.73.64/26 handle="k8s-pod-network.473bebdf8f20e9cc7a8d0dacd96352529707d38e2b066d30d3a48b02278866cc" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:49.951561 containerd[2113]: 2026-01-21 23:39:49.905 [INFO][5463] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.473bebdf8f20e9cc7a8d0dacd96352529707d38e2b066d30d3a48b02278866cc Jan 21 23:39:49.951561 containerd[2113]: 2026-01-21 23:39:49.913 [INFO][5463] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.64/26 handle="k8s-pod-network.473bebdf8f20e9cc7a8d0dacd96352529707d38e2b066d30d3a48b02278866cc" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:49.951561 containerd[2113]: 2026-01-21 23:39:49.925 [INFO][5463] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.73.72/26] block=192.168.73.64/26 handle="k8s-pod-network.473bebdf8f20e9cc7a8d0dacd96352529707d38e2b066d30d3a48b02278866cc" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:49.951561 containerd[2113]: 2026-01-21 23:39:49.925 [INFO][5463] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.72/26] handle="k8s-pod-network.473bebdf8f20e9cc7a8d0dacd96352529707d38e2b066d30d3a48b02278866cc" host="ci-4515.1.0-n-a0ba06055b" Jan 21 23:39:49.951561 containerd[2113]: 2026-01-21 23:39:49.925 [INFO][5463] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 21 23:39:49.951561 containerd[2113]: 2026-01-21 23:39:49.925 [INFO][5463] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.72/26] IPv6=[] ContainerID="473bebdf8f20e9cc7a8d0dacd96352529707d38e2b066d30d3a48b02278866cc" HandleID="k8s-pod-network.473bebdf8f20e9cc7a8d0dacd96352529707d38e2b066d30d3a48b02278866cc" Workload="ci--4515.1.0--n--a0ba06055b-k8s-calico--apiserver--86f4fc7866--w9csf-eth0" Jan 21 23:39:49.952723 containerd[2113]: 2026-01-21 23:39:49.928 [INFO][5442] cni-plugin/k8s.go 418: Populated endpoint ContainerID="473bebdf8f20e9cc7a8d0dacd96352529707d38e2b066d30d3a48b02278866cc" Namespace="calico-apiserver" Pod="calico-apiserver-86f4fc7866-w9csf" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-calico--apiserver--86f4fc7866--w9csf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--a0ba06055b-k8s-calico--apiserver--86f4fc7866--w9csf-eth0", GenerateName:"calico-apiserver-86f4fc7866-", Namespace:"calico-apiserver", SelfLink:"", UID:"67e69fef-e284-4866-bf16-ca5d0645fcac", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 39, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f4fc7866", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-a0ba06055b", ContainerID:"", Pod:"calico-apiserver-86f4fc7866-w9csf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8e421bad3fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:39:49.952723 containerd[2113]: 2026-01-21 23:39:49.928 [INFO][5442] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.72/32] ContainerID="473bebdf8f20e9cc7a8d0dacd96352529707d38e2b066d30d3a48b02278866cc" Namespace="calico-apiserver" Pod="calico-apiserver-86f4fc7866-w9csf" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-calico--apiserver--86f4fc7866--w9csf-eth0" Jan 21 23:39:49.952723 containerd[2113]: 2026-01-21 23:39:49.929 [INFO][5442] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8e421bad3fd ContainerID="473bebdf8f20e9cc7a8d0dacd96352529707d38e2b066d30d3a48b02278866cc" Namespace="calico-apiserver" Pod="calico-apiserver-86f4fc7866-w9csf" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-calico--apiserver--86f4fc7866--w9csf-eth0" Jan 21 23:39:49.952723 containerd[2113]: 2026-01-21 23:39:49.931 [INFO][5442] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="473bebdf8f20e9cc7a8d0dacd96352529707d38e2b066d30d3a48b02278866cc" Namespace="calico-apiserver" Pod="calico-apiserver-86f4fc7866-w9csf" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-calico--apiserver--86f4fc7866--w9csf-eth0" Jan 21 23:39:49.952723 containerd[2113]: 2026-01-21 23:39:49.932 
[INFO][5442] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="473bebdf8f20e9cc7a8d0dacd96352529707d38e2b066d30d3a48b02278866cc" Namespace="calico-apiserver" Pod="calico-apiserver-86f4fc7866-w9csf" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-calico--apiserver--86f4fc7866--w9csf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--a0ba06055b-k8s-calico--apiserver--86f4fc7866--w9csf-eth0", GenerateName:"calico-apiserver-86f4fc7866-", Namespace:"calico-apiserver", SelfLink:"", UID:"67e69fef-e284-4866-bf16-ca5d0645fcac", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 39, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f4fc7866", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-a0ba06055b", ContainerID:"473bebdf8f20e9cc7a8d0dacd96352529707d38e2b066d30d3a48b02278866cc", Pod:"calico-apiserver-86f4fc7866-w9csf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8e421bad3fd", MAC:"a2:4c:ba:a7:b9:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:39:49.952723 containerd[2113]: 2026-01-21 23:39:49.948 [INFO][5442] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="473bebdf8f20e9cc7a8d0dacd96352529707d38e2b066d30d3a48b02278866cc" Namespace="calico-apiserver" Pod="calico-apiserver-86f4fc7866-w9csf" WorkloadEndpoint="ci--4515.1.0--n--a0ba06055b-k8s-calico--apiserver--86f4fc7866--w9csf-eth0" Jan 21 23:39:49.955296 kubelet[3545]: E0121 23:39:49.955257 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wsfxd" podUID="40afe29b-91ee-4738-9c1a-8ee8dd5c9d9f" Jan 21 23:39:49.961525 kubelet[3545]: E0121 23:39:49.961255 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84459bb977-fzglc" podUID="b1e000bd-2ebe-4f78-af48-a2456035e42f" Jan 21 23:39:49.964255 kubelet[3545]: E0121 
23:39:49.964217 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vmwkp" podUID="03532856-4a1c-4971-af49-0f675b6cbf1f" Jan 21 23:39:49.983410 systemd[1]: Started cri-containerd-01bfadbb0821fcc2ab6c93e80485b1d63c1fe5cd8879420cf6bae5ef8b73de89.scope - libcontainer container 01bfadbb0821fcc2ab6c93e80485b1d63c1fe5cd8879420cf6bae5ef8b73de89. Jan 21 23:39:49.975000 audit[5514]: NETFILTER_CFG table=filter:138 family=2 entries=57 op=nft_register_chain pid=5514 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 23:39:49.975000 audit[5514]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27812 a0=3 a1=fffffdfbae70 a2=0 a3=ffff9bfdcfa8 items=0 ppid=4727 pid=5514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:49.975000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 23:39:50.012278 systemd-networkd[1703]: cali13559461ec0: Gained IPv6LL Jan 21 23:39:50.024626 containerd[2113]: time="2026-01-21T23:39:50.024582108Z" level=info msg="connecting to shim 473bebdf8f20e9cc7a8d0dacd96352529707d38e2b066d30d3a48b02278866cc" address="unix:///run/containerd/s/79b1043a04c63f370dbcf6adc2ac7b2c093437ca52aa490de80dc9b08d5e074f" namespace=k8s.io protocol=ttrpc version=3 Jan 21 23:39:50.062215 systemd[1]: Started cri-containerd-473bebdf8f20e9cc7a8d0dacd96352529707d38e2b066d30d3a48b02278866cc.scope - libcontainer container 473bebdf8f20e9cc7a8d0dacd96352529707d38e2b066d30d3a48b02278866cc. 
Jan 21 23:39:50.076236 systemd-networkd[1703]: cali9fd1e860916: Gained IPv6LL Jan 21 23:39:50.089000 audit: BPF prog-id=270 op=LOAD Jan 21 23:39:50.090000 audit: BPF prog-id=271 op=LOAD Jan 21 23:39:50.090000 audit[5545]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5533 pid=5545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:50.090000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437336265626466386632306539636337613864306461636439363335 Jan 21 23:39:50.090000 audit: BPF prog-id=271 op=UNLOAD Jan 21 23:39:50.090000 audit[5545]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5533 pid=5545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:50.090000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437336265626466386632306539636337613864306461636439363335 Jan 21 23:39:50.090000 audit: BPF prog-id=272 op=LOAD Jan 21 23:39:50.090000 audit[5545]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5533 pid=5545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:50.090000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437336265626466386632306539636337613864306461636439363335 Jan 21 23:39:50.090000 audit: BPF prog-id=273 op=LOAD Jan 21 23:39:50.090000 audit[5545]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5533 pid=5545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:50.090000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437336265626466386632306539636337613864306461636439363335 Jan 21 23:39:50.090000 audit: BPF prog-id=273 op=UNLOAD Jan 21 23:39:50.090000 audit[5545]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5533 pid=5545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:50.090000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437336265626466386632306539636337613864306461636439363335 Jan 21 
23:39:50.091000 audit: BPF prog-id=272 op=UNLOAD Jan 21 23:39:50.091000 audit[5545]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5533 pid=5545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:50.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437336265626466386632306539636337613864306461636439363335 Jan 21 23:39:50.091000 audit: BPF prog-id=274 op=LOAD Jan 21 23:39:50.091000 audit[5545]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5533 pid=5545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:50.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437336265626466386632306539636337613864306461636439363335 Jan 21 23:39:50.096000 audit: BPF prog-id=275 op=LOAD Jan 21 23:39:50.096000 audit: BPF prog-id=276 op=LOAD Jan 21 23:39:50.096000 audit[5501]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000228180 a2=98 a3=0 items=0 ppid=5489 pid=5501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:50.096000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031626661646262303832316663633261623663393365383034383562 Jan 21 23:39:50.097000 audit: BPF prog-id=276 op=UNLOAD Jan 21 23:39:50.097000 audit[5501]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5489 pid=5501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:50.097000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031626661646262303832316663633261623663393365383034383562 Jan 21 23:39:50.097000 audit: BPF prog-id=277 op=LOAD Jan 21 23:39:50.097000 audit[5501]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40002283e8 a2=98 a3=0 items=0 ppid=5489 pid=5501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:50.097000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031626661646262303832316663633261623663393365383034383562 Jan 21 23:39:50.097000 audit: BPF prog-id=278 op=LOAD Jan 21 23:39:50.097000 audit[5501]: SYSCALL 
arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000228168 a2=98 a3=0 items=0 ppid=5489 pid=5501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:50.097000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031626661646262303832316663633261623663393365383034383562 Jan 21 23:39:50.097000 audit: BPF prog-id=278 op=UNLOAD Jan 21 23:39:50.097000 audit[5501]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5489 pid=5501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:50.097000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031626661646262303832316663633261623663393365383034383562 Jan 21 23:39:50.097000 audit: BPF prog-id=277 op=UNLOAD Jan 21 23:39:50.097000 audit[5501]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5489 pid=5501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:50.097000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031626661646262303832316663633261623663393365383034383562 Jan 21 23:39:50.097000 audit: BPF prog-id=279 op=LOAD Jan 21 23:39:50.097000 audit[5501]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000228648 a2=98 a3=0 items=0 ppid=5489 pid=5501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:50.097000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031626661646262303832316663633261623663393365383034383562 Jan 21 23:39:50.113000 audit[5567]: NETFILTER_CFG table=filter:139 family=2 entries=14 op=nft_register_rule pid=5567 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:50.113000 audit[5567]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd8337150 a2=0 a3=1 items=0 ppid=3761 pid=5567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:50.113000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:50.126000 audit[5567]: NETFILTER_CFG table=nat:140 family=2 entries=56 op=nft_register_chain pid=5567 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:50.126000 audit[5567]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=19860 a0=3 a1=ffffd8337150 a2=0 a3=1 items=0 ppid=3761 pid=5567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:50.126000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:50.129014 containerd[2113]: time="2026-01-21T23:39:50.128829535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f4fc7866-w9csf,Uid:67e69fef-e284-4866-bf16-ca5d0645fcac,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"473bebdf8f20e9cc7a8d0dacd96352529707d38e2b066d30d3a48b02278866cc\"" Jan 21 23:39:50.132730 containerd[2113]: time="2026-01-21T23:39:50.131722851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 23:39:50.133384 containerd[2113]: time="2026-01-21T23:39:50.133223847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f4fc7866-d746f,Uid:fe4222fd-b3e4-4022-8b44-793668b7e61d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"01bfadbb0821fcc2ab6c93e80485b1d63c1fe5cd8879420cf6bae5ef8b73de89\"" Jan 21 23:39:50.417879 containerd[2113]: time="2026-01-21T23:39:50.417832494Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:39:50.420667 containerd[2113]: time="2026-01-21T23:39:50.420624783Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 23:39:50.420667 containerd[2113]: time="2026-01-21T23:39:50.420699498Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 23:39:50.421070 kubelet[3545]: E0121 23:39:50.421014 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:39:50.421148 kubelet[3545]: E0121 23:39:50.421085 3545 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:39:50.421742 containerd[2113]: time="2026-01-21T23:39:50.421368961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 23:39:50.421917 kubelet[3545]: E0121 23:39:50.421525 3545 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4r6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-86f4fc7866-w9csf_calico-apiserver(67e69fef-e284-4866-bf16-ca5d0645fcac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 23:39:50.423064 kubelet[3545]: E0121 23:39:50.423015 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f4fc7866-w9csf" podUID="67e69fef-e284-4866-bf16-ca5d0645fcac" Jan 21 23:39:50.679962 containerd[2113]: time="2026-01-21T23:39:50.679834949Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:39:50.682947 containerd[2113]: time="2026-01-21T23:39:50.682903015Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 23:39:50.683060 containerd[2113]: time="2026-01-21T23:39:50.682993378Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 23:39:50.683189 kubelet[3545]: E0121 23:39:50.683151 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:39:50.683551 kubelet[3545]: E0121 23:39:50.683198 3545 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:39:50.683551 kubelet[3545]: E0121 23:39:50.683309 3545 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pszq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-86f4fc7866-d746f_calico-apiserver(fe4222fd-b3e4-4022-8b44-793668b7e61d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 23:39:50.684487 kubelet[3545]: E0121 23:39:50.684435 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f4fc7866-d746f" podUID="fe4222fd-b3e4-4022-8b44-793668b7e61d" Jan 21 23:39:50.780629 systemd-networkd[1703]: 
calia8cee4bf0f5: Gained IPv6LL Jan 21 23:39:50.965643 kubelet[3545]: E0121 23:39:50.965600 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f4fc7866-w9csf" podUID="67e69fef-e284-4866-bf16-ca5d0645fcac" Jan 21 23:39:50.968144 kubelet[3545]: E0121 23:39:50.968104 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wsfxd" podUID="40afe29b-91ee-4738-9c1a-8ee8dd5c9d9f" Jan 21 23:39:50.968865 kubelet[3545]: E0121 23:39:50.968639 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f4fc7866-d746f" podUID="fe4222fd-b3e4-4022-8b44-793668b7e61d" Jan 21 23:39:50.969776 kubelet[3545]: E0121 23:39:50.969595 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vmwkp" podUID="03532856-4a1c-4971-af49-0f675b6cbf1f" Jan 21 23:39:50.972280 systemd-networkd[1703]: cali8e421bad3fd: Gained IPv6LL Jan 21 23:39:51.000000 audit[5587]: NETFILTER_CFG table=filter:141 family=2 entries=14 op=nft_register_rule pid=5587 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:51.000000 audit[5587]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffed28d850 a2=0 a3=1 items=0 ppid=3761 pid=5587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:51.000000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:51.006000 audit[5587]: NETFILTER_CFG table=nat:142 family=2 
entries=20 op=nft_register_rule pid=5587 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:51.006000 audit[5587]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffed28d850 a2=0 a3=1 items=0 ppid=3761 pid=5587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:51.006000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:51.868288 systemd-networkd[1703]: cali458adeaa12b: Gained IPv6LL Jan 21 23:39:51.968346 kubelet[3545]: E0121 23:39:51.968298 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f4fc7866-d746f" podUID="fe4222fd-b3e4-4022-8b44-793668b7e61d" Jan 21 23:39:51.969949 kubelet[3545]: E0121 23:39:51.969386 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f4fc7866-w9csf" podUID="67e69fef-e284-4866-bf16-ca5d0645fcac" Jan 21 23:39:52.021000 audit[5595]: NETFILTER_CFG table=filter:143 family=2 entries=14 op=nft_register_rule pid=5595 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:52.021000 audit[5595]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd4e0c690 a2=0 a3=1 items=0 ppid=3761 pid=5595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:52.021000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:52.033000 audit[5595]: NETFILTER_CFG table=nat:144 family=2 entries=20 op=nft_register_rule pid=5595 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:39:52.033000 audit[5595]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd4e0c690 a2=0 a3=1 items=0 ppid=3761 pid=5595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:52.033000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:39:53.313683 kubelet[3545]: I0121 23:39:53.313630 3545 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 23:39:53.691247 containerd[2113]: time="2026-01-21T23:39:53.691123506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 21 23:39:53.971569 containerd[2113]: 
time="2026-01-21T23:39:53.971447380Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:39:53.975531 containerd[2113]: time="2026-01-21T23:39:53.975419942Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 21 23:39:53.975531 containerd[2113]: time="2026-01-21T23:39:53.975494017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 21 23:39:53.975863 kubelet[3545]: E0121 23:39:53.975749 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 23:39:53.975863 kubelet[3545]: E0121 23:39:53.975804 3545 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 23:39:53.976099 kubelet[3545]: E0121 23:39:53.976073 3545 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:34f3459d8c144933a66ddd93a201138a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xjhg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d9759fcd4-9gjqv_calico-system(fd7f2311-8da9-446b-ab8a-7da03038d65b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 21 23:39:53.977988 containerd[2113]: time="2026-01-21T23:39:53.977921061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 21 23:39:54.246010 containerd[2113]: time="2026-01-21T23:39:54.245581048Z" level=info msg="fetch failed after status: 404 Not Found" 
host=ghcr.io Jan 21 23:39:54.248612 containerd[2113]: time="2026-01-21T23:39:54.248563983Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 21 23:39:54.248699 containerd[2113]: time="2026-01-21T23:39:54.248664891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 21 23:39:54.248902 kubelet[3545]: E0121 23:39:54.248864 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 23:39:54.248955 kubelet[3545]: E0121 23:39:54.248915 3545 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 23:39:54.249490 kubelet[3545]: E0121 23:39:54.249040 3545 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xjhg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d9759fcd4-9gjqv_calico-system(fd7f2311-8da9-446b-ab8a-7da03038d65b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 21 23:39:54.250315 kubelet[3545]: E0121 23:39:54.250282 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d9759fcd4-9gjqv" podUID="fd7f2311-8da9-446b-ab8a-7da03038d65b" Jan 21 23:40:02.692678 containerd[2113]: time="2026-01-21T23:40:02.692638142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 23:40:02.964680 containerd[2113]: time="2026-01-21T23:40:02.964474699Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:40:02.967318 containerd[2113]: time="2026-01-21T23:40:02.967258598Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 23:40:02.967668 containerd[2113]: time="2026-01-21T23:40:02.967281151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 23:40:02.967766 kubelet[3545]: E0121 23:40:02.967709 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:40:02.967766 kubelet[3545]: E0121 23:40:02.967765 3545 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:40:02.968122 kubelet[3545]: E0121 23:40:02.967885 3545 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pszq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-86f4fc7866-d746f_calico-apiserver(fe4222fd-b3e4-4022-8b44-793668b7e61d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 23:40:02.969402 kubelet[3545]: E0121 23:40:02.969333 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f4fc7866-d746f" podUID="fe4222fd-b3e4-4022-8b44-793668b7e61d" Jan 21 23:40:03.690856 containerd[2113]: time="2026-01-21T23:40:03.690618484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 21 23:40:03.987526 containerd[2113]: time="2026-01-21T23:40:03.987471847Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:40:03.992031 containerd[2113]: time="2026-01-21T23:40:03.991898369Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 21 23:40:03.992031 containerd[2113]: time="2026-01-21T23:40:03.991985348Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" 
Jan 21 23:40:03.992417 kubelet[3545]: E0121 23:40:03.992218 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 23:40:03.992417 kubelet[3545]: E0121 23:40:03.992255 3545 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 23:40:03.992417 kubelet[3545]: E0121 23:40:03.992387 3545 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxhrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-84459bb977-fzglc_calico-system(b1e000bd-2ebe-4f78-af48-a2456035e42f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 21 23:40:03.994303 kubelet[3545]: E0121 23:40:03.994248 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84459bb977-fzglc" podUID="b1e000bd-2ebe-4f78-af48-a2456035e42f" Jan 21 23:40:04.691309 containerd[2113]: time="2026-01-21T23:40:04.691220414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 21 23:40:04.695795 kubelet[3545]: E0121 23:40:04.695744 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d9759fcd4-9gjqv" podUID="fd7f2311-8da9-446b-ab8a-7da03038d65b" Jan 21 23:40:05.044452 containerd[2113]: time="2026-01-21T23:40:05.044398683Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:40:05.047272 containerd[2113]: time="2026-01-21T23:40:05.047229541Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 21 23:40:05.047363 containerd[2113]: time="2026-01-21T23:40:05.047319208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 21 23:40:05.047712 kubelet[3545]: E0121 23:40:05.047502 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 23:40:05.047712 kubelet[3545]: E0121 23:40:05.047558 3545 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 23:40:05.049277 kubelet[3545]: E0121 23:40:05.047756 3545 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sc6dl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vmwkp_calico-system(03532856-4a1c-4971-af49-0f675b6cbf1f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 21 23:40:05.049346 containerd[2113]: time="2026-01-21T23:40:05.047946006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 21 23:40:05.329639 containerd[2113]: time="2026-01-21T23:40:05.329515359Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:40:05.332998 containerd[2113]: time="2026-01-21T23:40:05.332889188Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 21 23:40:05.332998 containerd[2113]: time="2026-01-21T23:40:05.332936237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 21 23:40:05.333225 kubelet[3545]: E0121 23:40:05.333171 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 23:40:05.333511 kubelet[3545]: E0121 23:40:05.333228 3545 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 23:40:05.333665 containerd[2113]: time="2026-01-21T23:40:05.333519785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 21 23:40:05.333899 kubelet[3545]: E0121 23:40:05.333748 3545 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dkjgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wsfxd_calico-system(40afe29b-91ee-4738-9c1a-8ee8dd5c9d9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 21 23:40:05.335335 kubelet[3545]: E0121 23:40:05.335282 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wsfxd" podUID="40afe29b-91ee-4738-9c1a-8ee8dd5c9d9f" Jan 21 23:40:05.579464 containerd[2113]: time="2026-01-21T23:40:05.579406788Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:40:05.582568 containerd[2113]: time="2026-01-21T23:40:05.582464030Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 21 23:40:05.582631 containerd[2113]: time="2026-01-21T23:40:05.582559433Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 21 23:40:05.582835 kubelet[3545]: E0121 23:40:05.582787 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 23:40:05.582904 kubelet[3545]: E0121 23:40:05.582845 3545 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 23:40:05.584065 kubelet[3545]: E0121 23:40:05.583497 3545 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sc6dl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vmwkp_calico-system(03532856-4a1c-4971-af49-0f675b6cbf1f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 21 23:40:05.584663 kubelet[3545]: E0121 23:40:05.584609 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vmwkp" podUID="03532856-4a1c-4971-af49-0f675b6cbf1f" Jan 21 23:40:06.693354 containerd[2113]: time="2026-01-21T23:40:06.693225005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 23:40:06.947248 containerd[2113]: time="2026-01-21T23:40:06.946827156Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:40:06.950005 containerd[2113]: time="2026-01-21T23:40:06.949848557Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 23:40:06.950005 containerd[2113]: time="2026-01-21T23:40:06.949950760Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 23:40:06.950455 kubelet[3545]: E0121 23:40:06.950345 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:40:06.950455 kubelet[3545]: E0121 23:40:06.950410 3545 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:40:06.950943 kubelet[3545]: E0121 23:40:06.950890 3545 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4r6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-86f4fc7866-w9csf_calico-apiserver(67e69fef-e284-4866-bf16-ca5d0645fcac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 23:40:06.952109 kubelet[3545]: E0121 23:40:06.952066 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f4fc7866-w9csf" podUID="67e69fef-e284-4866-bf16-ca5d0645fcac" Jan 21 23:40:15.692241 kubelet[3545]: E0121 23:40:15.691840 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wsfxd" podUID="40afe29b-91ee-4738-9c1a-8ee8dd5c9d9f" Jan 21 23:40:15.694396 kubelet[3545]: E0121 23:40:15.693254 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f4fc7866-d746f" podUID="fe4222fd-b3e4-4022-8b44-793668b7e61d" Jan 21 23:40:15.762624 systemd[1]: Started sshd@7-10.200.20.29:22-10.200.16.10:50496.service - OpenSSH per-connection server daemon (10.200.16.10:50496). Jan 21 23:40:15.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.29:22-10.200.16.10:50496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:40:15.766996 kernel: kauditd_printk_skb: 239 callbacks suppressed Jan 21 23:40:15.767155 kernel: audit: type=1130 audit(1769038815.761:757): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.29:22-10.200.16.10:50496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:40:16.215000 audit[5668]: USER_ACCT pid=5668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:16.217491 sshd[5668]: Accepted publickey for core from 10.200.16.10 port 50496 ssh2: RSA SHA256:5TSQqK9LgDcDF4tXOpTwbPSMv8XD1lnuHq7SoUWOfFs Jan 21 23:40:16.235000 audit[5668]: CRED_ACQ pid=5668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:16.252938 kernel: audit: type=1101 audit(1769038816.215:758): pid=5668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:16.253085 kernel: audit: type=1103 audit(1769038816.235:759): pid=5668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:16.254582 sshd-session[5668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:40:16.263444 kernel: audit: type=1006 audit(1769038816.235:760): pid=5668 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 21 23:40:16.235000 audit[5668]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc7e7d1d0 a2=3 a3=0 items=0 ppid=1 pid=5668 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:40:16.282813 kernel: audit: type=1300 audit(1769038816.235:760): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc7e7d1d0 a2=3 a3=0 items=0 ppid=1 pid=5668 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:40:16.235000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:40:16.290115 kernel: audit: type=1327 audit(1769038816.235:760): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:40:16.294108 systemd-logind[2082]: New session 10 of user core. Jan 21 23:40:16.299262 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 21 23:40:16.302000 audit[5668]: USER_START pid=5668 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:16.321000 audit[5671]: CRED_ACQ pid=5671 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:16.338403 kernel: audit: type=1105 audit(1769038816.302:761): pid=5668 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:16.338544 kernel: audit: type=1103 audit(1769038816.321:762): pid=5671 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:16.539184 sshd[5671]: Connection closed by 10.200.16.10 port 50496 Jan 21 23:40:16.539987 sshd-session[5668]: pam_unix(sshd:session): session closed for user core Jan 21 23:40:16.540000 audit[5668]: USER_END pid=5668 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:16.544581 systemd[1]: sshd@7-10.200.20.29:22-10.200.16.10:50496.service: Deactivated successfully. Jan 21 23:40:16.547429 systemd[1]: session-10.scope: Deactivated successfully. Jan 21 23:40:16.561765 systemd-logind[2082]: Session 10 logged out. Waiting for processes to exit. Jan 21 23:40:16.540000 audit[5668]: CRED_DISP pid=5668 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:16.563184 systemd-logind[2082]: Removed session 10. Jan 21 23:40:16.576429 kernel: audit: type=1106 audit(1769038816.540:763): pid=5668 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:16.576584 kernel: audit: type=1104 audit(1769038816.540:764): pid=5668 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:16.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.29:22-10.200.16.10:50496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:40:18.692098 kubelet[3545]: E0121 23:40:18.691824 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84459bb977-fzglc" podUID="b1e000bd-2ebe-4f78-af48-a2456035e42f" Jan 21 23:40:18.693576 containerd[2113]: time="2026-01-21T23:40:18.693305654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 21 23:40:18.694018 kubelet[3545]: E0121 23:40:18.693526 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vmwkp" podUID="03532856-4a1c-4971-af49-0f675b6cbf1f" Jan 21 23:40:18.968270 containerd[2113]: time="2026-01-21T23:40:18.968217586Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:40:18.972370 containerd[2113]: time="2026-01-21T23:40:18.972317417Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 21 23:40:18.972571 containerd[2113]: time="2026-01-21T23:40:18.972349722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 21 23:40:18.972722 kubelet[3545]: E0121 23:40:18.972677 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 23:40:18.972784 kubelet[3545]: E0121 23:40:18.972733 3545 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 23:40:18.972866 kubelet[3545]: E0121 23:40:18.972834 3545 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:34f3459d8c144933a66ddd93a201138a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xjhg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d9759fcd4-9gjqv_calico-system(fd7f2311-8da9-446b-ab8a-7da03038d65b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 21 23:40:18.975012 containerd[2113]: time="2026-01-21T23:40:18.974964349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 21 23:40:19.258504 containerd[2113]: time="2026-01-21T23:40:19.258358060Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:40:19.328525 containerd[2113]: time="2026-01-21T23:40:19.328440660Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 21 23:40:19.328693 containerd[2113]: time="2026-01-21T23:40:19.328559264Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 21 23:40:19.329070 kubelet[3545]: E0121 23:40:19.328922 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 23:40:19.330496 kubelet[3545]: E0121 23:40:19.330370 3545 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 23:40:19.331314 kubelet[3545]: E0121 23:40:19.331020 3545 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xjhg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d9759fcd4-9gjqv_calico-system(fd7f2311-8da9-446b-ab8a-7da03038d65b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 21 23:40:19.332497 kubelet[3545]: E0121 23:40:19.332458 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d9759fcd4-9gjqv" podUID="fd7f2311-8da9-446b-ab8a-7da03038d65b" Jan 21 23:40:19.691983 kubelet[3545]: E0121 23:40:19.691532 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f4fc7866-w9csf" podUID="67e69fef-e284-4866-bf16-ca5d0645fcac" Jan 21 23:40:21.624224 systemd[1]: Started 
sshd@8-10.200.20.29:22-10.200.16.10:42206.service - OpenSSH per-connection server daemon (10.200.16.10:42206). Jan 21 23:40:21.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.29:22-10.200.16.10:42206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:40:21.627681 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 23:40:21.627855 kernel: audit: type=1130 audit(1769038821.623:766): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.29:22-10.200.16.10:42206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:40:22.042524 sshd[5695]: Accepted publickey for core from 10.200.16.10 port 42206 ssh2: RSA SHA256:5TSQqK9LgDcDF4tXOpTwbPSMv8XD1lnuHq7SoUWOfFs Jan 21 23:40:22.041000 audit[5695]: USER_ACCT pid=5695 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:22.058000 audit[5695]: CRED_ACQ pid=5695 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:22.067136 kernel: audit: type=1101 audit(1769038822.041:767): pid=5695 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:22.088540 sshd-session[5695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:40:22.094233 kernel: audit: type=1103 audit(1769038822.058:768): pid=5695 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:22.094318 kernel: audit: type=1006 audit(1769038822.058:769): pid=5695 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 21 23:40:22.123068 kernel: audit: type=1300 audit(1769038822.058:769): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd5d2d2f0 a2=3 a3=0 items=0 ppid=1 pid=5695 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:40:22.058000 audit[5695]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd5d2d2f0 a2=3 a3=0 items=0 ppid=1 pid=5695 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:40:22.106889 systemd-logind[2082]: New session 11 of user core. 
Jan 21 23:40:22.058000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:40:22.139170 kernel: audit: type=1327 audit(1769038822.058:769): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:40:22.141355 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 21 23:40:22.145000 audit[5695]: USER_START pid=5695 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:22.169000 audit[5698]: CRED_ACQ pid=5698 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:22.189870 kernel: audit: type=1105 audit(1769038822.145:770): pid=5695 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:22.190000 kernel: audit: type=1103 audit(1769038822.169:771): pid=5698 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:22.401761 sshd[5698]: Connection closed by 10.200.16.10 port 42206 Jan 21 23:40:22.402337 sshd-session[5695]: pam_unix(sshd:session): session closed for user core Jan 21 23:40:22.402000 audit[5695]: USER_END pid=5695 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:22.407224 systemd[1]: sshd@8-10.200.20.29:22-10.200.16.10:42206.service: Deactivated successfully. Jan 21 23:40:22.412039 systemd[1]: session-11.scope: Deactivated successfully. Jan 21 23:40:22.414472 systemd-logind[2082]: Session 11 logged out. Waiting for processes to exit. Jan 21 23:40:22.416448 systemd-logind[2082]: Removed session 11. 
Jan 21 23:40:22.402000 audit[5695]: CRED_DISP pid=5695 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:22.439549 kernel: audit: type=1106 audit(1769038822.402:772): pid=5695 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:22.439680 kernel: audit: type=1104 audit(1769038822.402:773): pid=5695 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:22.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.29:22-10.200.16.10:42206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:40:27.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.29:22-10.200.16.10:42210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:40:27.495578 systemd[1]: Started sshd@9-10.200.20.29:22-10.200.16.10:42210.service - OpenSSH per-connection server daemon (10.200.16.10:42210). Jan 21 23:40:27.498736 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 23:40:27.498832 kernel: audit: type=1130 audit(1769038827.494:775): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.29:22-10.200.16.10:42210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:40:27.694677 containerd[2113]: time="2026-01-21T23:40:27.694634378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 23:40:27.935000 audit[5735]: USER_ACCT pid=5735 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:27.937253 sshd[5735]: Accepted publickey for core from 10.200.16.10 port 42210 ssh2: RSA SHA256:5TSQqK9LgDcDF4tXOpTwbPSMv8XD1lnuHq7SoUWOfFs Jan 21 23:40:27.954082 containerd[2113]: time="2026-01-21T23:40:27.953952426Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:40:27.956099 kernel: audit: type=1101 audit(1769038827.935:776): pid=5735 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:27.958085 containerd[2113]: time="2026-01-21T23:40:27.956990684Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 23:40:27.958085 containerd[2113]: time="2026-01-21T23:40:27.957030094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 23:40:27.958634 kubelet[3545]: E0121 23:40:27.958546 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:40:27.958634 kubelet[3545]: E0121 23:40:27.958599 3545 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:40:27.958000 audit[5735]: CRED_ACQ pid=5735 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:27.959642 sshd-session[5735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:40:27.961393 kubelet[3545]: E0121 23:40:27.958735 3545 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pszq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-86f4fc7866-d746f_calico-apiserver(fe4222fd-b3e4-4022-8b44-793668b7e61d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 23:40:27.976464 kubelet[3545]: E0121 23:40:27.976333 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f4fc7866-d746f" podUID="fe4222fd-b3e4-4022-8b44-793668b7e61d" Jan 21 23:40:27.987065 kernel: audit: type=1103 audit(1769038827.958:777): pid=5735 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:27.987192 kernel: audit: type=1006 audit(1769038827.958:778): pid=5735 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 21 23:40:27.958000 audit[5735]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc2aa9770 a2=3 a3=0 items=0 ppid=1 pid=5735 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:40:28.006336 kernel: 
audit: type=1300 audit(1769038827.958:778): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc2aa9770 a2=3 a3=0 items=0 ppid=1 pid=5735 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:40:27.958000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:40:28.014909 kernel: audit: type=1327 audit(1769038827.958:778): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:40:28.014490 systemd-logind[2082]: New session 12 of user core. Jan 21 23:40:28.017267 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 21 23:40:28.020000 audit[5735]: USER_START pid=5735 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:28.039000 audit[5738]: CRED_ACQ pid=5738 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:28.056482 kernel: audit: type=1105 audit(1769038828.020:779): pid=5735 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:28.056632 kernel: audit: type=1103 audit(1769038828.039:780): pid=5738 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:28.247973 sshd[5738]: Connection closed by 10.200.16.10 port 42210 Jan 21 23:40:28.248970 sshd-session[5735]: pam_unix(sshd:session): session closed for user core Jan 21 23:40:28.249000 audit[5735]: USER_END pid=5735 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:28.255860 systemd[1]: sshd@9-10.200.20.29:22-10.200.16.10:42210.service: Deactivated successfully. Jan 21 23:40:28.260320 systemd[1]: session-12.scope: Deactivated successfully. 
Jan 21 23:40:28.249000 audit[5735]: CRED_DISP pid=5735 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:28.289330 kernel: audit: type=1106 audit(1769038828.249:781): pid=5735 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:28.289423 kernel: audit: type=1104 audit(1769038828.249:782): pid=5735 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:28.288950 systemd-logind[2082]: Session 12 logged out. Waiting for processes to exit. Jan 21 23:40:28.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.29:22-10.200.16.10:42210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:40:28.290749 systemd-logind[2082]: Removed session 12. Jan 21 23:40:28.340132 systemd[1]: Started sshd@10-10.200.20.29:22-10.200.16.10:42214.service - OpenSSH per-connection server daemon (10.200.16.10:42214). Jan 21 23:40:28.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.29:22-10.200.16.10:42214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:40:28.695295 containerd[2113]: time="2026-01-21T23:40:28.695177255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 21 23:40:28.777000 audit[5751]: USER_ACCT pid=5751 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:28.780023 sshd[5751]: Accepted publickey for core from 10.200.16.10 port 42214 ssh2: RSA SHA256:5TSQqK9LgDcDF4tXOpTwbPSMv8XD1lnuHq7SoUWOfFs Jan 21 23:40:28.780000 audit[5751]: CRED_ACQ pid=5751 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:28.780000 audit[5751]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffca6b1180 a2=3 a3=0 items=0 ppid=1 pid=5751 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:40:28.780000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:40:28.782227 sshd-session[5751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:40:28.788418 systemd-logind[2082]: New session 13 of user core. Jan 21 23:40:28.794550 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 21 23:40:28.796000 audit[5751]: USER_START pid=5751 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:28.799000 audit[5755]: CRED_ACQ pid=5755 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:28.982074 containerd[2113]: time="2026-01-21T23:40:28.981951115Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:40:29.004541 containerd[2113]: time="2026-01-21T23:40:29.004447642Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 21 23:40:29.004942 containerd[2113]: time="2026-01-21T23:40:29.004492820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 21 23:40:29.005140 kubelet[3545]: E0121 23:40:29.005101 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 23:40:29.005618 kubelet[3545]: E0121 23:40:29.005164 3545 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 23:40:29.005618 kubelet[3545]: E0121 23:40:29.005392 3545 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dkjgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wsfxd_calico-system(40afe29b-91ee-4738-9c1a-8ee8dd5c9d9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 21 23:40:29.008058 kubelet[3545]: E0121 23:40:29.006811 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wsfxd" podUID="40afe29b-91ee-4738-9c1a-8ee8dd5c9d9f" Jan 21 23:40:29.128755 sshd[5755]: Connection closed by 10.200.16.10 port 42214 Jan 21 23:40:29.131000 sshd-session[5751]: pam_unix(sshd:session): session 
closed for user core Jan 21 23:40:29.131000 audit[5751]: USER_END pid=5751 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:29.131000 audit[5751]: CRED_DISP pid=5751 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:29.135262 systemd[1]: sshd@10-10.200.20.29:22-10.200.16.10:42214.service: Deactivated successfully. Jan 21 23:40:29.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.29:22-10.200.16.10:42214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:40:29.138369 systemd[1]: session-13.scope: Deactivated successfully. Jan 21 23:40:29.140554 systemd-logind[2082]: Session 13 logged out. Waiting for processes to exit. Jan 21 23:40:29.142176 systemd-logind[2082]: Removed session 13. Jan 21 23:40:29.222036 systemd[1]: Started sshd@11-10.200.20.29:22-10.200.16.10:42218.service - OpenSSH per-connection server daemon (10.200.16.10:42218). Jan 21 23:40:29.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.29:22-10.200.16.10:42218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:40:29.649730 sshd[5765]: Accepted publickey for core from 10.200.16.10 port 42218 ssh2: RSA SHA256:5TSQqK9LgDcDF4tXOpTwbPSMv8XD1lnuHq7SoUWOfFs Jan 21 23:40:29.648000 audit[5765]: USER_ACCT pid=5765 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:29.649000 audit[5765]: CRED_ACQ pid=5765 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:29.651605 sshd-session[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:40:29.650000 audit[5765]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe2bd6af0 a2=3 a3=0 items=0 ppid=1 pid=5765 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:40:29.650000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:40:29.657185 systemd-logind[2082]: New session 14 of user core. Jan 21 23:40:29.664277 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 21 23:40:29.666000 audit[5765]: USER_START pid=5765 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:29.668000 audit[5768]: CRED_ACQ pid=5768 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:29.692469 containerd[2113]: time="2026-01-21T23:40:29.692319494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 21 23:40:29.944600 containerd[2113]: time="2026-01-21T23:40:29.944250782Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:40:29.964222 containerd[2113]: time="2026-01-21T23:40:29.964118417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 21 23:40:29.964222 containerd[2113]: time="2026-01-21T23:40:29.964121713Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 21 23:40:29.964442 kubelet[3545]: E0121 23:40:29.964403 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 23:40:29.964479 kubelet[3545]: E0121 23:40:29.964451 3545 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 23:40:29.965195 kubelet[3545]: E0121 23:40:29.964563 3545 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sc6dl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vmwkp_calico-system(03532856-4a1c-4971-af49-0f675b6cbf1f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 21 23:40:29.966141 sshd[5768]: Connection closed by 10.200.16.10 port 42218 Jan 21 23:40:29.966678 sshd-session[5765]: pam_unix(sshd:session): session closed for user core Jan 21 23:40:29.970627 containerd[2113]: time="2026-01-21T23:40:29.970602819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 21 23:40:29.967000 audit[5765]: USER_END pid=5765 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:29.967000 audit[5765]: CRED_DISP pid=5765 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:29.972651 systemd-logind[2082]: Session 14 logged out. Waiting for processes to exit. Jan 21 23:40:29.973439 systemd[1]: sshd@11-10.200.20.29:22-10.200.16.10:42218.service: Deactivated successfully. Jan 21 23:40:29.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.29:22-10.200.16.10:42218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:40:29.977920 systemd[1]: session-14.scope: Deactivated successfully. 
Jan 21 23:40:29.982249 systemd-logind[2082]: Removed session 14. Jan 21 23:40:30.353738 containerd[2113]: time="2026-01-21T23:40:30.353691143Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:40:30.356689 containerd[2113]: time="2026-01-21T23:40:30.356629758Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 21 23:40:30.357281 containerd[2113]: time="2026-01-21T23:40:30.356731897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 21 23:40:30.357358 kubelet[3545]: E0121 23:40:30.357222 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 23:40:30.357358 kubelet[3545]: E0121 23:40:30.357277 3545 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 23:40:30.357884 kubelet[3545]: E0121 23:40:30.357399 3545 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sc6dl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vmwkp_calico-system(03532856-4a1c-4971-af49-0f675b6cbf1f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 21 23:40:30.359167 kubelet[3545]: E0121 23:40:30.359107 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vmwkp" podUID="03532856-4a1c-4971-af49-0f675b6cbf1f" Jan 21 23:40:32.693650 containerd[2113]: time="2026-01-21T23:40:32.693358745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 21 23:40:32.960273 containerd[2113]: time="2026-01-21T23:40:32.960225681Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:40:32.963077 containerd[2113]: time="2026-01-21T23:40:32.962962016Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 21 23:40:32.963173 containerd[2113]: 
time="2026-01-21T23:40:32.963085501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 21 23:40:32.963370 kubelet[3545]: E0121 23:40:32.963320 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 23:40:32.963667 kubelet[3545]: E0121 23:40:32.963385 3545 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 23:40:32.964665 kubelet[3545]: E0121 23:40:32.964197 3545 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxhrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-kube-controllers-84459bb977-fzglc_calico-system(b1e000bd-2ebe-4f78-af48-a2456035e42f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 21 23:40:32.965999 kubelet[3545]: E0121 23:40:32.965961 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84459bb977-fzglc" podUID="b1e000bd-2ebe-4f78-af48-a2456035e42f" Jan 21 23:40:33.691456 containerd[2113]: time="2026-01-21T23:40:33.691416576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 23:40:33.945153 containerd[2113]: time="2026-01-21T23:40:33.944912599Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:40:33.947804 containerd[2113]: time="2026-01-21T23:40:33.947758394Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 23:40:33.947889 containerd[2113]: time="2026-01-21T23:40:33.947856573Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 23:40:33.948194 kubelet[3545]: E0121 23:40:33.948130 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:40:33.948194 kubelet[3545]: E0121 23:40:33.948182 3545 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:40:33.948474 kubelet[3545]: E0121 23:40:33.948431 3545 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4r6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-86f4fc7866-w9csf_calico-apiserver(67e69fef-e284-4866-bf16-ca5d0645fcac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 23:40:33.949736 kubelet[3545]: E0121 23:40:33.949689 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f4fc7866-w9csf" podUID="67e69fef-e284-4866-bf16-ca5d0645fcac" Jan 21 23:40:34.691653 kubelet[3545]: E0121 23:40:34.691609 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-5d9759fcd4-9gjqv" podUID="fd7f2311-8da9-446b-ab8a-7da03038d65b" Jan 21 23:40:35.052283 systemd[1]: Started sshd@12-10.200.20.29:22-10.200.16.10:41494.service - OpenSSH per-connection server daemon (10.200.16.10:41494). Jan 21 23:40:35.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.29:22-10.200.16.10:41494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:40:35.055978 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 21 23:40:35.056099 kernel: audit: type=1130 audit(1769038835.052:802): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.29:22-10.200.16.10:41494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:40:35.465569 sshd[5786]: Accepted publickey for core from 10.200.16.10 port 41494 ssh2: RSA SHA256:5TSQqK9LgDcDF4tXOpTwbPSMv8XD1lnuHq7SoUWOfFs Jan 21 23:40:35.465000 audit[5786]: USER_ACCT pid=5786 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:35.472631 sshd-session[5786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:40:35.472000 audit[5786]: CRED_ACQ pid=5786 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:35.498783 kernel: audit: type=1101 audit(1769038835.465:803): pid=5786 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:35.498910 kernel: audit: type=1103 audit(1769038835.472:804): pid=5786 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:35.509189 kernel: audit: type=1006 audit(1769038835.472:805): pid=5786 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 21 23:40:35.472000 audit[5786]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc3b40c60 a2=3 a3=0 items=0 ppid=1 pid=5786 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:40:35.526267 kernel: audit: type=1300 audit(1769038835.472:805): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc3b40c60 a2=3 a3=0 items=0 ppid=1 pid=5786 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:40:35.472000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:40:35.533188 kernel: audit: type=1327 audit(1769038835.472:805): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:40:35.536694 systemd-logind[2082]: New session 15 of user core. Jan 21 23:40:35.546243 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 21 23:40:35.548000 audit[5786]: USER_START pid=5786 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:35.569799 kernel: audit: type=1105 audit(1769038835.548:806): pid=5786 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:35.569915 kernel: audit: type=1103 audit(1769038835.569:807): pid=5789 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:35.569000 audit[5789]: CRED_ACQ pid=5789 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:35.760255 sshd[5789]: Connection closed by 10.200.16.10 port 41494 Jan 21 23:40:35.760820 sshd-session[5786]: pam_unix(sshd:session): session closed for user core Jan 21 23:40:35.762000 audit[5786]: USER_END pid=5786 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:35.764000 audit[5786]: CRED_DISP pid=5786 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:35.797184 kernel: audit: type=1106 audit(1769038835.762:808): pid=5786 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:35.797315 kernel: audit: type=1104 audit(1769038835.764:809): pid=5786 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:35.783295 systemd[1]: sshd@12-10.200.20.29:22-10.200.16.10:41494.service: Deactivated successfully. Jan 21 23:40:35.786459 systemd[1]: session-15.scope: Deactivated successfully. Jan 21 23:40:35.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.29:22-10.200.16.10:41494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:40:35.801188 systemd-logind[2082]: Session 15 logged out. Waiting for processes to exit. Jan 21 23:40:35.803231 systemd-logind[2082]: Removed session 15. Jan 21 23:40:39.692136 kubelet[3545]: E0121 23:40:39.691871 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f4fc7866-d746f" podUID="fe4222fd-b3e4-4022-8b44-793668b7e61d" Jan 21 23:40:40.851035 systemd[1]: Started sshd@13-10.200.20.29:22-10.200.16.10:45016.service - OpenSSH per-connection server daemon (10.200.16.10:45016). Jan 21 23:40:40.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.29:22-10.200.16.10:45016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:40:40.855987 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 23:40:40.856087 kernel: audit: type=1130 audit(1769038840.851:811): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.29:22-10.200.16.10:45016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:40:41.292075 sshd[5801]: Accepted publickey for core from 10.200.16.10 port 45016 ssh2: RSA SHA256:5TSQqK9LgDcDF4tXOpTwbPSMv8XD1lnuHq7SoUWOfFs Jan 21 23:40:41.290000 audit[5801]: USER_ACCT pid=5801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:41.310553 sshd-session[5801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:40:41.309000 audit[5801]: CRED_ACQ pid=5801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:41.328618 kernel: audit: type=1101 audit(1769038841.290:812): pid=5801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:41.328716 kernel: audit: type=1103 audit(1769038841.309:813): pid=5801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:41.340955 kernel: audit: type=1006 audit(1769038841.309:814): pid=5801 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 21 23:40:41.309000 audit[5801]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff0406450 a2=3 a3=0 items=0 ppid=1 pid=5801 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:40:41.359360 kernel: audit: type=1300 audit(1769038841.309:814): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff0406450 a2=3 a3=0 items=0 ppid=1 pid=5801 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:40:41.360231 systemd-logind[2082]: New session 16 of user core. Jan 21 23:40:41.309000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:40:41.368145 kernel: audit: type=1327 audit(1769038841.309:814): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:40:41.370297 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 21 23:40:41.372000 audit[5801]: USER_START pid=5801 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:41.374000 audit[5806]: CRED_ACQ pid=5806 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:41.409168 kernel: audit: type=1105 audit(1769038841.372:815): pid=5801 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:41.409277 kernel: audit: type=1103 audit(1769038841.374:816): pid=5806 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:41.615251 sshd[5806]: Connection closed by 10.200.16.10 port 45016 Jan 21 23:40:41.614589 sshd-session[5801]: pam_unix(sshd:session): session closed for user core Jan 21 23:40:41.614000 audit[5801]: USER_END pid=5801 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:41.622341 systemd[1]: session-16.scope: Deactivated successfully. Jan 21 23:40:41.623817 systemd[1]: sshd@13-10.200.20.29:22-10.200.16.10:45016.service: Deactivated successfully. 
Jan 21 23:40:41.616000 audit[5801]: CRED_DISP pid=5801 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:41.653090 kernel: audit: type=1106 audit(1769038841.614:817): pid=5801 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:41.653225 kernel: audit: type=1104 audit(1769038841.616:818): pid=5801 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:41.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.29:22-10.200.16.10:45016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:40:41.655293 systemd-logind[2082]: Session 16 logged out. Waiting for processes to exit. Jan 21 23:40:41.656576 systemd-logind[2082]: Removed session 16. Jan 21 23:40:43.690973 kubelet[3545]: E0121 23:40:43.690917 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wsfxd" podUID="40afe29b-91ee-4738-9c1a-8ee8dd5c9d9f" Jan 21 23:40:43.692712 kubelet[3545]: E0121 23:40:43.692681 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vmwkp" podUID="03532856-4a1c-4971-af49-0f675b6cbf1f" Jan 21 23:40:45.691606 kubelet[3545]: E0121 23:40:45.691546 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d9759fcd4-9gjqv" podUID="fd7f2311-8da9-446b-ab8a-7da03038d65b" Jan 21 23:40:45.692530 kubelet[3545]: E0121 23:40:45.691642 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84459bb977-fzglc" podUID="b1e000bd-2ebe-4f78-af48-a2456035e42f" Jan 21 23:40:46.708129 systemd[1]: Started sshd@14-10.200.20.29:22-10.200.16.10:45018.service - OpenSSH per-connection server daemon (10.200.16.10:45018). Jan 21 23:40:46.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.29:22-10.200.16.10:45018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:40:46.712685 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 23:40:46.712782 kernel: audit: type=1130 audit(1769038846.707:820): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.29:22-10.200.16.10:45018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:40:47.144000 audit[5817]: USER_ACCT pid=5817 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:47.145497 sshd[5817]: Accepted publickey for core from 10.200.16.10 port 45018 ssh2: RSA SHA256:5TSQqK9LgDcDF4tXOpTwbPSMv8XD1lnuHq7SoUWOfFs Jan 21 23:40:47.162000 audit[5817]: CRED_ACQ pid=5817 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:47.165144 sshd-session[5817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:40:47.178463 kernel: audit: type=1101 audit(1769038847.144:821): pid=5817 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:47.178576 kernel: audit: type=1103 audit(1769038847.162:822): pid=5817 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:47.189624 kernel: audit: type=1006 audit(1769038847.162:823): pid=5817 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 21 23:40:47.162000 audit[5817]: SYSCALL arch=c00000b7 syscall=64 
success=yes exit=3 a0=8 a1=ffffd7bbfd10 a2=3 a3=0 items=0 ppid=1 pid=5817 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:40:47.206866 kernel: audit: type=1300 audit(1769038847.162:823): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd7bbfd10 a2=3 a3=0 items=0 ppid=1 pid=5817 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:40:47.162000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:40:47.214835 kernel: audit: type=1327 audit(1769038847.162:823): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:40:47.219643 systemd-logind[2082]: New session 17 of user core. Jan 21 23:40:47.223228 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 21 23:40:47.224000 audit[5817]: USER_START pid=5817 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:47.243000 audit[5820]: CRED_ACQ pid=5820 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:47.259064 kernel: audit: type=1105 audit(1769038847.224:824): pid=5817 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:47.259857 kernel: audit: type=1103 audit(1769038847.243:825): pid=5820 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:47.449222 sshd[5820]: Connection closed by 10.200.16.10 port 45018 Jan 21 23:40:47.450205 sshd-session[5817]: pam_unix(sshd:session): session closed for user core Jan 21 23:40:47.451000 audit[5817]: USER_END pid=5817 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:47.458446 systemd[1]: sshd@14-10.200.20.29:22-10.200.16.10:45018.service: Deactivated successfully. Jan 21 23:40:47.460439 systemd[1]: session-17.scope: Deactivated successfully. Jan 21 23:40:47.475113 systemd-logind[2082]: Session 17 logged out. Waiting for processes to exit. Jan 21 23:40:47.476228 systemd-logind[2082]: Removed session 17. 
Jan 21 23:40:47.451000 audit[5817]: CRED_DISP pid=5817 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:47.491679 kernel: audit: type=1106 audit(1769038847.451:826): pid=5817 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:47.491813 kernel: audit: type=1104 audit(1769038847.451:827): pid=5817 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:47.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.29:22-10.200.16.10:45018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:40:47.692568 kubelet[3545]: E0121 23:40:47.692531 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f4fc7866-w9csf" podUID="67e69fef-e284-4866-bf16-ca5d0645fcac" Jan 21 23:40:51.690827 kubelet[3545]: E0121 23:40:51.690523 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f4fc7866-d746f" podUID="fe4222fd-b3e4-4022-8b44-793668b7e61d" Jan 21 23:40:52.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.29:22-10.200.16.10:38042 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:40:52.539168 systemd[1]: Started sshd@15-10.200.20.29:22-10.200.16.10:38042.service - OpenSSH per-connection server daemon (10.200.16.10:38042). Jan 21 23:40:52.543062 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 23:40:52.543165 kernel: audit: type=1130 audit(1769038852.538:829): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.29:22-10.200.16.10:38042 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:40:52.976000 audit[5834]: USER_ACCT pid=5834 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:52.995496 sshd-session[5834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:40:52.994000 audit[5834]: CRED_ACQ pid=5834 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:52.996655 sshd[5834]: Accepted publickey for core from 10.200.16.10 port 38042 ssh2: RSA SHA256:5TSQqK9LgDcDF4tXOpTwbPSMv8XD1lnuHq7SoUWOfFs Jan 21 23:40:53.010983 kernel: audit: type=1101 audit(1769038852.976:830): pid=5834 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:53.011108 kernel: audit: type=1103 audit(1769038852.994:831): pid=5834 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:53.016113 systemd-logind[2082]: New session 18 of user core. Jan 21 23:40:53.027787 kernel: audit: type=1006 audit(1769038852.994:832): pid=5834 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 21 23:40:52.994000 audit[5834]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdbc76370 a2=3 a3=0 items=0 ppid=1 pid=5834 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:40:53.045353 kernel: audit: type=1300 audit(1769038852.994:832): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdbc76370 a2=3 a3=0 items=0 ppid=1 pid=5834 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:40:53.047303 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 21 23:40:52.994000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:40:53.055868 kernel: audit: type=1327 audit(1769038852.994:832): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:40:53.051000 audit[5834]: USER_START pid=5834 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:53.076198 kernel: audit: type=1105 audit(1769038853.051:833): pid=5834 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:53.056000 audit[5837]: CRED_ACQ pid=5837 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:53.091720 kernel: audit: type=1103 audit(1769038853.056:834): pid=5837 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:53.259608 sshd[5837]: Connection closed by 10.200.16.10 port 38042 Jan 21 23:40:53.259885 sshd-session[5834]: pam_unix(sshd:session): session closed for user core Jan 21 23:40:53.260000 audit[5834]: USER_END pid=5834 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:53.263704 systemd-logind[2082]: Session 18 logged out. Waiting for processes to exit. Jan 21 23:40:53.265424 systemd[1]: sshd@15-10.200.20.29:22-10.200.16.10:38042.service: Deactivated successfully. Jan 21 23:40:53.267882 systemd[1]: session-18.scope: Deactivated successfully. Jan 21 23:40:53.269751 systemd-logind[2082]: Removed session 18. 
Jan 21 23:40:53.260000 audit[5834]: CRED_DISP pid=5834 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:53.295334 kernel: audit: type=1106 audit(1769038853.260:835): pid=5834 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:53.295434 kernel: audit: type=1104 audit(1769038853.260:836): pid=5834 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:53.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.29:22-10.200.16.10:38042 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:40:53.348429 systemd[1]: Started sshd@16-10.200.20.29:22-10.200.16.10:38058.service - OpenSSH per-connection server daemon (10.200.16.10:38058). Jan 21 23:40:53.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.29:22-10.200.16.10:38058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:40:53.772000 audit[5848]: USER_ACCT pid=5848 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:53.773614 sshd[5848]: Accepted publickey for core from 10.200.16.10 port 38058 ssh2: RSA SHA256:5TSQqK9LgDcDF4tXOpTwbPSMv8XD1lnuHq7SoUWOfFs Jan 21 23:40:53.773000 audit[5848]: CRED_ACQ pid=5848 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:53.773000 audit[5848]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdfa2c190 a2=3 a3=0 items=0 ppid=1 pid=5848 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:40:53.773000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:40:53.775211 sshd-session[5848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:40:53.779377 systemd-logind[2082]: New session 19 of user core. Jan 21 23:40:53.785393 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 21 23:40:53.786000 audit[5848]: USER_START pid=5848 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:53.788000 audit[5877]: CRED_ACQ pid=5877 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:54.171991 sshd[5877]: Connection closed by 10.200.16.10 port 38058 Jan 21 23:40:54.173915 sshd-session[5848]: pam_unix(sshd:session): session closed for user core Jan 21 23:40:54.174000 audit[5848]: USER_END pid=5848 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:54.174000 audit[5848]: CRED_DISP pid=5848 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:54.177351 systemd[1]: sshd@16-10.200.20.29:22-10.200.16.10:38058.service: Deactivated successfully. Jan 21 23:40:54.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.29:22-10.200.16.10:38058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:40:54.183256 systemd[1]: session-19.scope: Deactivated successfully. Jan 21 23:40:54.185627 systemd-logind[2082]: Session 19 logged out. Waiting for processes to exit. Jan 21 23:40:54.186737 systemd-logind[2082]: Removed session 19. Jan 21 23:40:54.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.29:22-10.200.16.10:38062 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:40:54.262215 systemd[1]: Started sshd@17-10.200.20.29:22-10.200.16.10:38062.service - OpenSSH per-connection server daemon (10.200.16.10:38062). 
Jan 21 23:40:54.683000 audit[5887]: USER_ACCT pid=5887 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:54.684869 sshd[5887]: Accepted publickey for core from 10.200.16.10 port 38062 ssh2: RSA SHA256:5TSQqK9LgDcDF4tXOpTwbPSMv8XD1lnuHq7SoUWOfFs Jan 21 23:40:54.684000 audit[5887]: CRED_ACQ pid=5887 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:54.684000 audit[5887]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd5492500 a2=3 a3=0 items=0 ppid=1 pid=5887 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:40:54.684000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:40:54.687105 sshd-session[5887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:40:54.695092 kubelet[3545]: E0121 23:40:54.694481 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vmwkp" podUID="03532856-4a1c-4971-af49-0f675b6cbf1f" Jan 21 23:40:54.694585 systemd-logind[2082]: New session 20 of user core. Jan 21 23:40:54.696193 kubelet[3545]: E0121 23:40:54.694105 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wsfxd" podUID="40afe29b-91ee-4738-9c1a-8ee8dd5c9d9f" Jan 21 23:40:54.699231 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 21 23:40:54.701000 audit[5887]: USER_START pid=5887 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:54.704000 audit[5890]: CRED_ACQ pid=5890 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:55.249000 audit[5905]: NETFILTER_CFG table=filter:145 family=2 entries=26 op=nft_register_rule pid=5905 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:40:55.249000 audit[5905]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffe4d29200 a2=0 a3=1 items=0 ppid=3761 pid=5905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:40:55.249000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:40:55.255000 audit[5905]: NETFILTER_CFG table=nat:146 family=2 entries=20 op=nft_register_rule pid=5905 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:40:55.255000 audit[5905]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffe4d29200 a2=0 a3=1 items=0 ppid=3761 pid=5905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:40:55.255000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:40:55.267000 audit[5907]: NETFILTER_CFG table=filter:147 family=2 entries=38 op=nft_register_rule pid=5907 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:40:55.267000 audit[5907]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffdb31ade0 a2=0 a3=1 items=0 ppid=3761 pid=5907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:40:55.267000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:40:55.273000 audit[5907]: NETFILTER_CFG table=nat:148 family=2 entries=20 op=nft_register_rule pid=5907 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:40:55.273000 audit[5907]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffdb31ade0 a2=0 a3=1 items=0 ppid=3761 pid=5907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:40:55.273000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:40:55.339245 sshd[5890]: Connection closed by 10.200.16.10 port 38062 Jan 21 23:40:55.340117 sshd-session[5887]: pam_unix(sshd:session): session closed for user core Jan 21 23:40:55.341000 
audit[5887]: USER_END pid=5887 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:55.341000 audit[5887]: CRED_DISP pid=5887 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:55.344852 systemd[1]: sshd@17-10.200.20.29:22-10.200.16.10:38062.service: Deactivated successfully. Jan 21 23:40:55.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.29:22-10.200.16.10:38062 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:40:55.346964 systemd[1]: session-20.scope: Deactivated successfully. Jan 21 23:40:55.347754 systemd-logind[2082]: Session 20 logged out. Waiting for processes to exit. Jan 21 23:40:55.350297 systemd-logind[2082]: Removed session 20. Jan 21 23:40:55.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.29:22-10.200.16.10:38066 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:40:55.420302 systemd[1]: Started sshd@18-10.200.20.29:22-10.200.16.10:38066.service - OpenSSH per-connection server daemon (10.200.16.10:38066). Jan 21 23:40:55.810000 audit[5912]: USER_ACCT pid=5912 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:55.811924 sshd[5912]: Accepted publickey for core from 10.200.16.10 port 38066 ssh2: RSA SHA256:5TSQqK9LgDcDF4tXOpTwbPSMv8XD1lnuHq7SoUWOfFs Jan 21 23:40:55.811000 audit[5912]: CRED_ACQ pid=5912 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:55.811000 audit[5912]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff61c8d10 a2=3 a3=0 items=0 ppid=1 pid=5912 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:40:55.811000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:40:55.813032 sshd-session[5912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:40:55.821150 systemd-logind[2082]: New session 21 of user core. Jan 21 23:40:55.828193 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 21 23:40:55.830000 audit[5912]: USER_START pid=5912 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:55.832000 audit[5915]: CRED_ACQ pid=5915 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:56.225330 sshd[5915]: Connection closed by 10.200.16.10 port 38066 Jan 21 23:40:56.243790 sshd-session[5912]: pam_unix(sshd:session): session closed for user core Jan 21 23:40:56.244000 audit[5912]: USER_END pid=5912 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:56.244000 audit[5912]: CRED_DISP pid=5912 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:56.248563 systemd[1]: sshd@18-10.200.20.29:22-10.200.16.10:38066.service: Deactivated successfully. Jan 21 23:40:56.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.29:22-10.200.16.10:38066 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:40:56.252655 systemd[1]: session-21.scope: Deactivated successfully. Jan 21 23:40:56.254823 systemd-logind[2082]: Session 21 logged out. Waiting for processes to exit. Jan 21 23:40:56.256794 systemd-logind[2082]: Removed session 21. Jan 21 23:40:56.312444 systemd[1]: Started sshd@19-10.200.20.29:22-10.200.16.10:38072.service - OpenSSH per-connection server daemon (10.200.16.10:38072). Jan 21 23:40:56.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.29:22-10.200.16.10:38072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:40:56.738000 audit[5925]: USER_ACCT pid=5925 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:56.740143 sshd[5925]: Accepted publickey for core from 10.200.16.10 port 38072 ssh2: RSA SHA256:5TSQqK9LgDcDF4tXOpTwbPSMv8XD1lnuHq7SoUWOfFs Jan 21 23:40:56.740000 audit[5925]: CRED_ACQ pid=5925 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:56.740000 audit[5925]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcad6c400 a2=3 a3=0 items=0 ppid=1 pid=5925 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:40:56.740000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:40:56.741640 sshd-session[5925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:40:56.747707 systemd-logind[2082]: New session 22 of user core. Jan 21 23:40:56.752603 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 21 23:40:56.754000 audit[5925]: USER_START pid=5925 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:56.756000 audit[5928]: CRED_ACQ pid=5928 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:57.035164 sshd[5928]: Connection closed by 10.200.16.10 port 38072 Jan 21 23:40:57.035414 sshd-session[5925]: pam_unix(sshd:session): session closed for user core Jan 21 23:40:57.036000 audit[5925]: USER_END pid=5925 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:57.036000 audit[5925]: CRED_DISP pid=5925 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:40:57.040273 systemd-logind[2082]: Session 22 logged out. Waiting for processes to exit. Jan 21 23:40:57.040901 systemd[1]: sshd@19-10.200.20.29:22-10.200.16.10:38072.service: Deactivated successfully. Jan 21 23:40:57.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.29:22-10.200.16.10:38072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:40:57.042854 systemd[1]: session-22.scope: Deactivated successfully. 
Jan 21 23:40:57.044950 systemd-logind[2082]: Removed session 22. Jan 21 23:40:58.692628 kubelet[3545]: E0121 23:40:58.692457 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84459bb977-fzglc" podUID="b1e000bd-2ebe-4f78-af48-a2456035e42f" Jan 21 23:40:58.692628 kubelet[3545]: E0121 23:40:58.692574 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d9759fcd4-9gjqv" podUID="fd7f2311-8da9-446b-ab8a-7da03038d65b" Jan 21 23:40:59.272000 audit[5941]: NETFILTER_CFG table=filter:149 family=2 entries=26 op=nft_register_rule pid=5941 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:40:59.276891 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 21 23:40:59.277004 kernel: audit: type=1325 audit(1769038859.272:878): table=filter:149 family=2 entries=26 op=nft_register_rule pid=5941 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:40:59.272000 audit[5941]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc0c833f0 a2=0 a3=1 items=0 ppid=3761 pid=5941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:40:59.338033 kernel: audit: type=1300 audit(1769038859.272:878): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc0c833f0 a2=0 a3=1 items=0 ppid=3761 pid=5941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:40:59.272000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:40:59.349551 kernel: audit: type=1327 audit(1769038859.272:878): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:40:59.339000 audit[5941]: NETFILTER_CFG table=nat:150 family=2 entries=104 op=nft_register_chain pid=5941 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:40:59.359914 kernel: audit: type=1325 audit(1769038859.339:879): table=nat:150 family=2 entries=104 op=nft_register_chain pid=5941 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:40:59.339000 
audit[5941]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffc0c833f0 a2=0 a3=1 items=0 ppid=3761 pid=5941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:40:59.379060 kernel: audit: type=1300 audit(1769038859.339:879): arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffc0c833f0 a2=0 a3=1 items=0 ppid=3761 pid=5941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:40:59.339000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:40:59.388877 kernel: audit: type=1327 audit(1769038859.339:879): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:40:59.692019 kubelet[3545]: E0121 23:40:59.691906 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f4fc7866-w9csf" podUID="67e69fef-e284-4866-bf16-ca5d0645fcac" Jan 21 23:41:02.143427 systemd[1]: Started sshd@20-10.200.20.29:22-10.200.16.10:48382.service - OpenSSH per-connection server daemon (10.200.16.10:48382). Jan 21 23:41:02.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.29:22-10.200.16.10:48382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:41:02.163080 kernel: audit: type=1130 audit(1769038862.143:880): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.29:22-10.200.16.10:48382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:41:02.574000 audit[5950]: USER_ACCT pid=5950 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:02.583163 sshd[5950]: Accepted publickey for core from 10.200.16.10 port 48382 ssh2: RSA SHA256:5TSQqK9LgDcDF4tXOpTwbPSMv8XD1lnuHq7SoUWOfFs Jan 21 23:41:02.593658 kernel: audit: type=1101 audit(1769038862.574:881): pid=5950 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:02.593752 kernel: audit: type=1103 audit(1769038862.592:882): pid=5950 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:02.592000 audit[5950]: CRED_ACQ pid=5950 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:02.593517 sshd-session[5950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:41:02.598308 systemd-logind[2082]: New session 23 of user core. Jan 21 23:41:02.617754 kernel: audit: type=1006 audit(1769038862.592:883): pid=5950 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 21 23:41:02.592000 audit[5950]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe1d21bd0 a2=3 a3=0 items=0 ppid=1 pid=5950 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:41:02.592000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:41:02.619336 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 21 23:41:02.621000 audit[5950]: USER_START pid=5950 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:02.623000 audit[5953]: CRED_ACQ pid=5953 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:02.876034 sshd[5953]: Connection closed by 10.200.16.10 port 48382 Jan 21 23:41:02.876551 sshd-session[5950]: pam_unix(sshd:session): session closed for user core Jan 21 23:41:02.877000 audit[5950]: USER_END pid=5950 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:02.877000 audit[5950]: CRED_DISP pid=5950 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:02.879950 systemd[1]: sshd@20-10.200.20.29:22-10.200.16.10:48382.service: Deactivated successfully. Jan 21 23:41:02.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.29:22-10.200.16.10:48382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:41:02.882145 systemd[1]: session-23.scope: Deactivated successfully. Jan 21 23:41:02.884003 systemd-logind[2082]: Session 23 logged out. Waiting for processes to exit. Jan 21 23:41:02.886404 systemd-logind[2082]: Removed session 23. 
Jan 21 23:41:03.691699 kubelet[3545]: E0121 23:41:03.691024 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f4fc7866-d746f" podUID="fe4222fd-b3e4-4022-8b44-793668b7e61d" Jan 21 23:41:06.693614 kubelet[3545]: E0121 23:41:06.692866 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vmwkp" podUID="03532856-4a1c-4971-af49-0f675b6cbf1f" Jan 21 23:41:07.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.29:22-10.200.16.10:48390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:41:07.963287 systemd[1]: Started sshd@21-10.200.20.29:22-10.200.16.10:48390.service - OpenSSH per-connection server daemon (10.200.16.10:48390). Jan 21 23:41:07.966414 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 21 23:41:07.966481 kernel: audit: type=1130 audit(1769038867.962:889): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.29:22-10.200.16.10:48390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:41:08.363000 audit[5967]: USER_ACCT pid=5967 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:08.381338 sshd[5967]: Accepted publickey for core from 10.200.16.10 port 48390 ssh2: RSA SHA256:5TSQqK9LgDcDF4tXOpTwbPSMv8XD1lnuHq7SoUWOfFs Jan 21 23:41:08.383758 sshd-session[5967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:41:08.382000 audit[5967]: CRED_ACQ pid=5967 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:08.389456 systemd-logind[2082]: New session 24 of user core. 
Jan 21 23:41:08.402749 kernel: audit: type=1101 audit(1769038868.363:890): pid=5967 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:08.402819 kernel: audit: type=1103 audit(1769038868.382:891): pid=5967 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:08.405212 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 21 23:41:08.418037 kernel: audit: type=1006 audit(1769038868.382:892): pid=5967 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 21 23:41:08.382000 audit[5967]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff3209390 a2=3 a3=0 items=0 ppid=1 pid=5967 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:41:08.435363 kernel: audit: type=1300 audit(1769038868.382:892): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff3209390 a2=3 a3=0 items=0 ppid=1 pid=5967 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:41:08.382000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:41:08.443067 kernel: audit: type=1327 audit(1769038868.382:892): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:41:08.407000 audit[5967]: USER_START pid=5967 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:08.462211 kernel: audit: type=1105 audit(1769038868.407:893): pid=5967 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:08.462339 kernel: audit: type=1103 audit(1769038868.408:894): pid=5970 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:08.408000 audit[5970]: CRED_ACQ pid=5970 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:08.632604 sshd[5970]: Connection closed by 10.200.16.10 port 48390 Jan 21 23:41:08.634093 sshd-session[5967]: pam_unix(sshd:session): session closed for user core Jan 21 23:41:08.635000 audit[5967]: USER_END pid=5967 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:08.640657 systemd-logind[2082]: Session 24 logged out. Waiting for processes to exit. Jan 21 23:41:08.641187 systemd[1]: sshd@21-10.200.20.29:22-10.200.16.10:48390.service: Deactivated successfully. Jan 21 23:41:08.643449 systemd[1]: session-24.scope: Deactivated successfully. Jan 21 23:41:08.646483 systemd-logind[2082]: Removed session 24. Jan 21 23:41:08.635000 audit[5967]: CRED_DISP pid=5967 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:08.677066 kernel: audit: type=1106 audit(1769038868.635:895): pid=5967 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:08.677167 kernel: audit: type=1104 audit(1769038868.635:896): pid=5967 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:08.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.29:22-10.200.16.10:48390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:41:09.692069 containerd[2113]: time="2026-01-21T23:41:09.691810607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 21 23:41:09.692782 kubelet[3545]: E0121 23:41:09.692642 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84459bb977-fzglc" podUID="b1e000bd-2ebe-4f78-af48-a2456035e42f" Jan 21 23:41:09.993675 containerd[2113]: time="2026-01-21T23:41:09.993494761Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:41:09.996542 containerd[2113]: time="2026-01-21T23:41:09.996184385Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 21 23:41:09.996542 containerd[2113]: time="2026-01-21T23:41:09.996239995Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 21 23:41:09.996995 kubelet[3545]: E0121 23:41:09.996941 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 23:41:09.997069 kubelet[3545]: E0121 23:41:09.997001 3545 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 23:41:09.997205 kubelet[3545]: E0121 23:41:09.997167 3545 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dkjgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wsfxd_calico-system(40afe29b-91ee-4738-9c1a-8ee8dd5c9d9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 21 23:41:10.000065 kubelet[3545]: E0121 23:41:09.998587 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wsfxd" podUID="40afe29b-91ee-4738-9c1a-8ee8dd5c9d9f" Jan 21 23:41:11.690922 containerd[2113]: time="2026-01-21T23:41:11.690813154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 
21 23:41:11.934840 containerd[2113]: time="2026-01-21T23:41:11.934649578Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:41:11.937423 containerd[2113]: time="2026-01-21T23:41:11.937396716Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 21 23:41:11.937635 containerd[2113]: time="2026-01-21T23:41:11.937453934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 21 23:41:11.937850 kubelet[3545]: E0121 23:41:11.937781 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 23:41:11.937850 kubelet[3545]: E0121 23:41:11.937833 3545 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 23:41:11.938594 kubelet[3545]: E0121 23:41:11.938557 3545 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:34f3459d8c144933a66ddd93a201138a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xjhg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d9759fcd4-9gjqv_calico-system(fd7f2311-8da9-446b-ab8a-7da03038d65b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 21 23:41:11.941648 containerd[2113]: time="2026-01-21T23:41:11.941531263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 21 23:41:12.191541 containerd[2113]: time="2026-01-21T23:41:12.191492136Z" level=info msg="fetch failed 
after status: 404 Not Found" host=ghcr.io Jan 21 23:41:12.194885 containerd[2113]: time="2026-01-21T23:41:12.194277507Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 21 23:41:12.194885 containerd[2113]: time="2026-01-21T23:41:12.194302148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 21 23:41:12.194968 kubelet[3545]: E0121 23:41:12.194494 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 23:41:12.194968 kubelet[3545]: E0121 23:41:12.194537 3545 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 23:41:12.194968 kubelet[3545]: E0121 23:41:12.194645 3545 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xjhg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d9759fcd4-9gjqv_calico-system(fd7f2311-8da9-446b-ab8a-7da03038d65b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 21 23:41:12.195789 kubelet[3545]: E0121 23:41:12.195764 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d9759fcd4-9gjqv" podUID="fd7f2311-8da9-446b-ab8a-7da03038d65b" Jan 21 23:41:13.691156 kubelet[3545]: E0121 23:41:13.691116 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f4fc7866-w9csf" podUID="67e69fef-e284-4866-bf16-ca5d0645fcac" Jan 21 23:41:13.721659 systemd[1]: Started sshd@22-10.200.20.29:22-10.200.16.10:47138.service - OpenSSH per-connection server daemon (10.200.16.10:47138). Jan 21 23:41:13.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.29:22-10.200.16.10:47138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:41:13.724991 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 23:41:13.725446 kernel: audit: type=1130 audit(1769038873.720:898): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.29:22-10.200.16.10:47138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:41:14.156000 audit[5988]: USER_ACCT pid=5988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:14.174784 sshd[5988]: Accepted publickey for core from 10.200.16.10 port 47138 ssh2: RSA SHA256:5TSQqK9LgDcDF4tXOpTwbPSMv8XD1lnuHq7SoUWOfFs Jan 21 23:41:14.174932 sshd-session[5988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:41:14.173000 audit[5988]: CRED_ACQ pid=5988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:14.191276 kernel: audit: type=1101 audit(1769038874.156:899): pid=5988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:14.191353 kernel: audit: type=1103 audit(1769038874.173:900): pid=5988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:14.180280 systemd-logind[2082]: New session 25 of user core. Jan 21 23:41:14.200673 kernel: audit: type=1006 audit(1769038874.173:901): pid=5988 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 21 23:41:14.173000 audit[5988]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd63e4060 a2=3 a3=0 items=0 ppid=1 pid=5988 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:41:14.201236 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 21 23:41:14.217977 kernel: audit: type=1300 audit(1769038874.173:901): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd63e4060 a2=3 a3=0 items=0 ppid=1 pid=5988 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:41:14.173000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:41:14.224875 kernel: audit: type=1327 audit(1769038874.173:901): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:41:14.205000 audit[5988]: USER_START pid=5988 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:14.243352 kernel: audit: type=1105 audit(1769038874.205:902): pid=5988 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:14.208000 audit[5991]: CRED_ACQ pid=5991 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:14.258643 kernel: audit: type=1103 audit(1769038874.208:903): pid=5991 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:14.443388 sshd[5991]: Connection closed by 10.200.16.10 port 47138 Jan 21 23:41:14.443692 sshd-session[5988]: pam_unix(sshd:session): session closed for user core Jan 21 23:41:14.443000 audit[5988]: USER_END pid=5988 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:14.446653 systemd-logind[2082]: Session 25 logged out. Waiting for processes to exit. Jan 21 23:41:14.448040 systemd[1]: sshd@22-10.200.20.29:22-10.200.16.10:47138.service: Deactivated successfully. Jan 21 23:41:14.455231 systemd[1]: session-25.scope: Deactivated successfully. Jan 21 23:41:14.458433 systemd-logind[2082]: Removed session 25. 
Jan 21 23:41:14.443000 audit[5988]: CRED_DISP pid=5988 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:14.480164 kernel: audit: type=1106 audit(1769038874.443:904): pid=5988 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:14.480248 kernel: audit: type=1104 audit(1769038874.443:905): pid=5988 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:14.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.29:22-10.200.16.10:47138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:41:15.690866 containerd[2113]: time="2026-01-21T23:41:15.690435868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 23:41:15.915978 containerd[2113]: time="2026-01-21T23:41:15.915928908Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:41:15.918725 containerd[2113]: time="2026-01-21T23:41:15.918670301Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 23:41:15.918914 containerd[2113]: time="2026-01-21T23:41:15.918699302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 23:41:15.918986 kubelet[3545]: E0121 23:41:15.918942 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:41:15.919680 kubelet[3545]: E0121 23:41:15.918998 3545 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:41:15.919680 kubelet[3545]: E0121 23:41:15.919137 3545 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pszq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-86f4fc7866-d746f_calico-apiserver(fe4222fd-b3e4-4022-8b44-793668b7e61d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 23:41:15.921301 kubelet[3545]: E0121 23:41:15.921184 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f4fc7866-d746f" podUID="fe4222fd-b3e4-4022-8b44-793668b7e61d" Jan 21 23:41:19.531674 systemd[1]: Started sshd@23-10.200.20.29:22-10.200.16.10:37038.service - OpenSSH per-connection server daemon (10.200.16.10:37038). Jan 21 23:41:19.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.29:22-10.200.16.10:37038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:41:19.535238 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 23:41:19.535297 kernel: audit: type=1130 audit(1769038879.530:907): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.29:22-10.200.16.10:37038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:41:19.690863 containerd[2113]: time="2026-01-21T23:41:19.690825789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 21 23:41:19.953084 containerd[2113]: time="2026-01-21T23:41:19.952833543Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:41:19.956177 containerd[2113]: time="2026-01-21T23:41:19.956134707Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 21 23:41:19.956269 containerd[2113]: time="2026-01-21T23:41:19.956221046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 21 23:41:19.956805 kubelet[3545]: E0121 23:41:19.956498 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 23:41:19.956805 kubelet[3545]: E0121 23:41:19.956546 3545 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 23:41:19.956805 kubelet[3545]: E0121 23:41:19.956657 3545 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sc6dl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vmwkp_calico-system(03532856-4a1c-4971-af49-0f675b6cbf1f): ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 21 23:41:19.958999 containerd[2113]: time="2026-01-21T23:41:19.958936246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 21 23:41:19.973000 audit[6010]: USER_ACCT pid=6010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:19.975094 sshd[6010]: Accepted publickey for core from 10.200.16.10 port 37038 ssh2: RSA SHA256:5TSQqK9LgDcDF4tXOpTwbPSMv8XD1lnuHq7SoUWOfFs Jan 21 23:41:19.991000 audit[6010]: CRED_ACQ pid=6010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:20.006912 kernel: audit: type=1101 audit(1769038879.973:908): pid=6010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:20.006983 kernel: audit: type=1103 audit(1769038879.991:909): pid=6010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:20.007645 sshd-session[6010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:41:20.015778 systemd-logind[2082]: New session 26 of user core. Jan 21 23:41:20.017355 kernel: audit: type=1006 audit(1769038879.991:910): pid=6010 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 21 23:41:19.991000 audit[6010]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc33b5fe0 a2=3 a3=0 items=0 ppid=1 pid=6010 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:41:20.033999 kernel: audit: type=1300 audit(1769038879.991:910): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc33b5fe0 a2=3 a3=0 items=0 ppid=1 pid=6010 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:41:19.991000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:41:20.041608 kernel: audit: type=1327 audit(1769038879.991:910): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:41:20.042211 systemd[1]: Started session-26.scope - Session 26 of User core. 
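The kernel echoes each userspace audit record with a numeric type (type=1101, 1103, 1006, 1300, 1327, ...) while the paired userspace lines name the event (USER_ACCT, CRED_ACQ, USER_START, ...). A small lookup for the types that appear in this session trace, using the standard Linux audit constants, plus a helper to annotate a raw line:

```python
import re

# Numeric audit record types seen in this trace, mapped to their
# symbolic names (standard Linux audit constants).
AUDIT_TYPES = {
    1006: "LOGIN",
    1101: "USER_ACCT",
    1103: "CRED_ACQ",
    1104: "CRED_DISP",
    1105: "USER_START",
    1106: "USER_END",
    1130: "SERVICE_START",
    1131: "SERVICE_STOP",
    1300: "SYSCALL",
    1327: "PROCTITLE",
}

def audit_type_name(line: str) -> str | None:
    m = re.search(r"type=(\d+)", line)
    return AUDIT_TYPES.get(int(m.group(1))) if m else None
```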
Jan 21 23:41:20.045000 audit[6010]: USER_START pid=6010 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:20.047000 audit[6027]: CRED_ACQ pid=6027 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:20.080709 kernel: audit: type=1105 audit(1769038880.045:911): pid=6010 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:20.080784 kernel: audit: type=1103 audit(1769038880.047:912): pid=6027 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:20.259576 containerd[2113]: time="2026-01-21T23:41:20.259375142Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:41:20.262167 containerd[2113]: time="2026-01-21T23:41:20.262069877Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 21 23:41:20.262167 containerd[2113]: time="2026-01-21T23:41:20.262119495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 21 23:41:20.262438 kubelet[3545]: E0121 23:41:20.262388 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 23:41:20.262491 kubelet[3545]: E0121 23:41:20.262443 3545 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 23:41:20.262588 kubelet[3545]: E0121 23:41:20.262552 3545 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sc6dl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vmwkp_calico-system(03532856-4a1c-4971-af49-0f675b6cbf1f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 21 23:41:20.263844 kubelet[3545]: E0121 23:41:20.263801 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vmwkp" podUID="03532856-4a1c-4971-af49-0f675b6cbf1f" Jan 21 23:41:20.284833 sshd[6027]: Connection closed by 10.200.16.10 port 37038 Jan 21 23:41:20.285977 sshd-session[6010]: pam_unix(sshd:session): session closed for user core Jan 21 23:41:20.286000 audit[6010]: USER_END pid=6010 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:20.290130 systemd[1]: sshd@23-10.200.20.29:22-10.200.16.10:37038.service: Deactivated successfully. Jan 21 23:41:20.292721 systemd[1]: session-26.scope: Deactivated successfully. 
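The "Error syncing pod, skipping" entries aggregate every failing container in a pod (here calico-csi and csi-node-driver-registrar in csi-node-driver-vmwkp). A sketch for summarizing such lines when triaging a journal dump like this one; the regexes are tuned to the image names seen above and are an assumption, not anything kubelet provides:

```python
import re

# Summarize a kubelet "Error syncing pod" journal line: which pod, and
# which images failed to pull.
IMAGE_RE = re.compile(r"ghcr\.io/flatcar/calico/[\w.-]+:v[\d.]+")
POD_RE = re.compile(r'pod="([^"]+)"')

def summarize(line: str) -> tuple[str | None, list[str]]:
    pod = POD_RE.search(line)
    images = sorted(set(IMAGE_RE.findall(line)))
    return (pod.group(1) if pod else None, images)

# e.g. summarize(<the csi-node-driver line above>) ->
# ('calico-system/csi-node-driver-vmwkp',
#  ['ghcr.io/flatcar/calico/csi:v3.30.4',
#   'ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4'])
```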
Jan 21 23:41:20.286000 audit[6010]: CRED_DISP pid=6010 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:20.309823 systemd-logind[2082]: Session 26 logged out. Waiting for processes to exit. Jan 21 23:41:20.310912 systemd-logind[2082]: Removed session 26. Jan 21 23:41:20.320451 kernel: audit: type=1106 audit(1769038880.286:913): pid=6010 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:20.320551 kernel: audit: type=1104 audit(1769038880.286:914): pid=6010 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:20.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.29:22-10.200.16.10:37038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:41:22.694575 kubelet[3545]: E0121 23:41:22.694395 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wsfxd" podUID="40afe29b-91ee-4738-9c1a-8ee8dd5c9d9f" Jan 21 23:41:23.692113 kubelet[3545]: E0121 23:41:23.692031 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d9759fcd4-9gjqv" podUID="fd7f2311-8da9-446b-ab8a-7da03038d65b" Jan 21 23:41:24.691421 containerd[2113]: time="2026-01-21T23:41:24.690943960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 21 23:41:24.934675 containerd[2113]: time="2026-01-21T23:41:24.934625390Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:41:24.937423 containerd[2113]: time="2026-01-21T23:41:24.937382568Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 21 23:41:24.937514 containerd[2113]: time="2026-01-21T23:41:24.937472403Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 21 23:41:24.937712 kubelet[3545]: E0121 23:41:24.937668 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 23:41:24.938007 kubelet[3545]: E0121 23:41:24.937719 3545 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 23:41:24.938007 kubelet[3545]: E0121 23:41:24.937824 3545 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxhrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-84459bb977-fzglc_calico-system(b1e000bd-2ebe-4f78-af48-a2456035e42f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 21 23:41:24.939373 kubelet[3545]: E0121 23:41:24.939307 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84459bb977-fzglc" podUID="b1e000bd-2ebe-4f78-af48-a2456035e42f" Jan 21 23:41:25.377454 systemd[1]: Started sshd@24-10.200.20.29:22-10.200.16.10:37046.service - OpenSSH per-connection server daemon (10.200.16.10:37046). Jan 21 23:41:25.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.29:22-10.200.16.10:37046 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:41:25.381245 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 23:41:25.381286 kernel: audit: type=1130 audit(1769038885.376:916): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.29:22-10.200.16.10:37046 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:41:25.691028 containerd[2113]: time="2026-01-21T23:41:25.690706714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 23:41:25.815077 sshd[6063]: Accepted publickey for core from 10.200.16.10 port 37046 ssh2: RSA SHA256:5TSQqK9LgDcDF4tXOpTwbPSMv8XD1lnuHq7SoUWOfFs Jan 21 23:41:25.813000 audit[6063]: USER_ACCT pid=6063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:25.832501 sshd-session[6063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:41:25.830000 audit[6063]: CRED_ACQ pid=6063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:25.850603 kernel: audit: type=1101 audit(1769038885.813:917): pid=6063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:25.850690 kernel: audit: type=1103 audit(1769038885.830:918): pid=6063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:25.861058 kernel: audit: type=1006 audit(1769038885.830:919): pid=6063 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 21 23:41:25.830000 audit[6063]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffca367f0 a2=3 a3=0 items=0 ppid=1 pid=6063 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:41:25.862910 systemd-logind[2082]: New session 27 of user core. Jan 21 23:41:25.830000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:41:25.884081 kernel: audit: type=1300 audit(1769038885.830:919): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffca367f0 a2=3 a3=0 items=0 ppid=1 pid=6063 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:41:25.894304 systemd[1]: Started session-27.scope - Session 27 of User core. 
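sshd logs only the key type and SHA256 fingerprint of the accepted public key ("RSA SHA256:5TSQ..."). For reference, that fingerprint format can be reproduced from an authorized_keys entry as below; a sketch only, the key material itself is of course not in this log:

```python
import base64
import hashlib

# Recompute the fingerprint format sshd logs above ("RSA SHA256:..."):
# SHA-256 over the raw public-key blob, base64-encoded without padding.
# `pub_line` is assumed to be one line from authorized_keys or a *.pub file.
def ssh_fingerprint(pub_line: str) -> str:
    blob = base64.b64decode(pub_line.split()[1])
    return "SHA256:" + base64.b64encode(hashlib.sha256(blob).digest()).decode().rstrip("=")
```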
Jan 21 23:41:25.900079 kernel: audit: type=1327 audit(1769038885.830:919): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:41:25.899000 audit[6063]: USER_START pid=6063 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:25.922000 audit[6066]: CRED_ACQ pid=6066 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:25.943893 kernel: audit: type=1105 audit(1769038885.899:920): pid=6063 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:25.944007 kernel: audit: type=1103 audit(1769038885.922:921): pid=6066 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:25.956522 containerd[2113]: time="2026-01-21T23:41:25.956440982Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:41:25.959360 containerd[2113]: time="2026-01-21T23:41:25.959314899Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 23:41:25.959476 containerd[2113]: time="2026-01-21T23:41:25.959405439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 23:41:25.959636 kubelet[3545]: E0121 23:41:25.959598 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:41:25.959939 kubelet[3545]: E0121 23:41:25.959648 3545 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:41:25.959939 kubelet[3545]: E0121 23:41:25.959782 3545 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4r6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-86f4fc7866-w9csf_calico-apiserver(67e69fef-e284-4866-bf16-ca5d0645fcac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 23:41:25.961238 kubelet[3545]: E0121 23:41:25.961205 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f4fc7866-w9csf" podUID="67e69fef-e284-4866-bf16-ca5d0645fcac" Jan 21 23:41:26.132176 sshd[6066]: Connection closed by 10.200.16.10 port 37046 Jan 21 23:41:26.132900 sshd-session[6063]: pam_unix(sshd:session): session closed for user core Jan 21 23:41:26.132000 audit[6063]: USER_END pid=6063 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:26.157529 systemd[1]: sshd@24-10.200.20.29:22-10.200.16.10:37046.service: Deactivated successfully. Jan 21 23:41:26.159435 systemd[1]: session-27.scope: Deactivated successfully. Jan 21 23:41:26.162248 systemd-logind[2082]: Session 27 logged out. Waiting for processes to exit. 
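Every pull in this section fails the same way: containerd's resolve step gets 404 Not Found from ghcr.io for the v3.30.4 tags, which kubelet surfaces as ErrImagePull. The tag's existence can be checked directly against the registry over the OCI distribution API; a sketch assuming ghcr.io issues an anonymous pull token for the repository (it does for public images; a private one would fail at the token step instead):

```python
import json
import urllib.error
import urllib.request

# Ask ghcr.io directly whether the tag containerd failed to resolve exists.
REPO = "flatcar/calico/apiserver"   # image repository from the log above
TAG = "v3.30.4"

token_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{REPO}:pull"
token = json.load(urllib.request.urlopen(token_url))["token"]

req = urllib.request.Request(
    f"https://ghcr.io/v2/{REPO}/manifests/{TAG}",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.oci.image.index.v1+json, "
                  "application/vnd.docker.distribution.manifest.list.v2+json",
    },
    method="HEAD",
)
try:
    with urllib.request.urlopen(req) as resp:
        print("manifest found:", resp.status)
except urllib.error.HTTPError as err:
    # A 404 here corresponds to containerd's "fetch failed after status:
    # 404 Not Found" and kubelet's NotFound / ErrImagePull entries above.
    print("manifest missing:", err.code)
```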
Jan 21 23:41:26.164487 systemd-logind[2082]: Removed session 27. Jan 21 23:41:26.133000 audit[6063]: CRED_DISP pid=6063 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:26.184081 kernel: audit: type=1106 audit(1769038886.132:922): pid=6063 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:26.184185 kernel: audit: type=1104 audit(1769038886.133:923): pid=6063 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 23:41:26.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.29:22-10.200.16.10:37046 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:41:26.692260 kubelet[3545]: E0121 23:41:26.691908 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f4fc7866-d746f" podUID="fe4222fd-b3e4-4022-8b44-793668b7e61d"
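Once a pull fails, kubelet moves the container to ImagePullBackOff and retries on an exponential schedule, which is why the later entries for goldmane, whisker, and calico-apiserver only log the back-off rather than a fresh 404. A sketch of that schedule under assumed kubelet defaults (10s initial delay, doubling per failure, capped at 5 minutes):

```python
# Image-pull back-off schedule under assumed kubelet defaults.
def backoff_schedule(initial: float = 10, factor: float = 2, cap: float = 300, steps: int = 6):
    delay, schedule = initial, []
    for _ in range(steps):
        schedule.append(min(delay, cap))
        delay *= factor
    return schedule

print(backoff_schedule())  # [10, 20, 40, 80, 160, 300]
```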