Jan 14 00:05:57.625707 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490] Jan 14 00:05:57.625724 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Tue Jan 13 22:00:26 -00 2026 Jan 14 00:05:57.625731 kernel: KASLR enabled Jan 14 00:05:57.625735 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jan 14 00:05:57.625740 kernel: printk: legacy bootconsole [pl11] enabled Jan 14 00:05:57.625744 kernel: efi: EFI v2.7 by EDK II Jan 14 00:05:57.625750 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e3f9018 RNG=0x3f979998 MEMRESERVE=0x3db83598 Jan 14 00:05:57.625754 kernel: random: crng init done Jan 14 00:05:57.625758 kernel: secureboot: Secure boot disabled Jan 14 00:05:57.625762 kernel: ACPI: Early table checksum verification disabled Jan 14 00:05:57.625766 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL) Jan 14 00:05:57.625770 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 00:05:57.625774 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 00:05:57.625780 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 14 00:05:57.625785 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 00:05:57.625790 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 00:05:57.625794 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 00:05:57.625799 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 00:05:57.625804 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 00:05:57.625808 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 00:05:57.625813 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jan 14 00:05:57.625817 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 00:05:57.625822 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jan 14 00:05:57.625826 kernel: ACPI: Use ACPI SPCR as default console: Yes Jan 14 00:05:57.625830 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 14 00:05:57.625835 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug Jan 14 00:05:57.625839 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug Jan 14 00:05:57.625845 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 14 00:05:57.625849 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 14 00:05:57.625853 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 14 00:05:57.625858 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 14 00:05:57.625862 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 14 00:05:57.625867 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 14 00:05:57.625871 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 14 00:05:57.625875 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 14 00:05:57.625880 kernel: ACPI: SRAT: Node 0 PXM 
0 [mem 0x800000000000-0xffffffffffff] hotplug Jan 14 00:05:57.625884 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff] Jan 14 00:05:57.625889 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff] Jan 14 00:05:57.625894 kernel: Zone ranges: Jan 14 00:05:57.625898 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jan 14 00:05:57.625905 kernel: DMA32 empty Jan 14 00:05:57.625909 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jan 14 00:05:57.625914 kernel: Device empty Jan 14 00:05:57.625920 kernel: Movable zone start for each node Jan 14 00:05:57.625925 kernel: Early memory node ranges Jan 14 00:05:57.625929 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jan 14 00:05:57.625934 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff] Jan 14 00:05:57.625939 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff] Jan 14 00:05:57.625943 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff] Jan 14 00:05:57.625948 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff] Jan 14 00:05:57.625953 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff] Jan 14 00:05:57.625957 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jan 14 00:05:57.625963 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jan 14 00:05:57.625968 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jan 14 00:05:57.625972 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1 Jan 14 00:05:57.625977 kernel: psci: probing for conduit method from ACPI. Jan 14 00:05:57.625982 kernel: psci: PSCIv1.3 detected in firmware. Jan 14 00:05:57.625986 kernel: psci: Using standard PSCI v0.2 function IDs Jan 14 00:05:57.626015 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jan 14 00:05:57.626020 kernel: psci: SMC Calling Convention v1.4 Jan 14 00:05:57.626025 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jan 14 00:05:57.626029 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jan 14 00:05:57.626034 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jan 14 00:05:57.626039 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jan 14 00:05:57.626045 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 14 00:05:57.626050 kernel: Detected PIPT I-cache on CPU0 Jan 14 00:05:57.626055 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Jan 14 00:05:57.626060 kernel: CPU features: detected: GIC system register CPU interface Jan 14 00:05:57.626064 kernel: CPU features: detected: Spectre-v4 Jan 14 00:05:57.626069 kernel: CPU features: detected: Spectre-BHB Jan 14 00:05:57.626074 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 14 00:05:57.626079 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 14 00:05:57.626083 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Jan 14 00:05:57.626088 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 14 00:05:57.626093 kernel: alternatives: applying boot alternatives Jan 14 00:05:57.626099 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=3d3f73de8d2693594dfefd279d2c8d77c282a05a4cbc54177503d31784261f6b Jan 14 00:05:57.626104 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 14 00:05:57.626109 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 14 00:05:57.626114 kernel: Fallback order for Node 0: 0 Jan 14 00:05:57.626118 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Jan 14 00:05:57.626123 kernel: Policy zone: Normal Jan 14 00:05:57.626128 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 14 00:05:57.626132 kernel: software IO TLB: area num 2. Jan 14 00:05:57.626137 kernel: software IO TLB: mapped [mem 0x0000000037370000-0x000000003b370000] (64MB) Jan 14 00:05:57.626142 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 14 00:05:57.626147 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 14 00:05:57.626153 kernel: rcu: RCU event tracing is enabled. Jan 14 00:05:57.626158 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 14 00:05:57.626163 kernel: Trampoline variant of Tasks RCU enabled. Jan 14 00:05:57.626167 kernel: Tracing variant of Tasks RCU enabled. Jan 14 00:05:57.626172 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 14 00:05:57.626177 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 14 00:05:57.626182 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 00:05:57.626187 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 14 00:05:57.626191 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 14 00:05:57.626196 kernel: GICv3: 960 SPIs implemented Jan 14 00:05:57.626202 kernel: GICv3: 0 Extended SPIs implemented Jan 14 00:05:57.626206 kernel: Root IRQ handler: gic_handle_irq Jan 14 00:05:57.626211 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jan 14 00:05:57.626216 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Jan 14 00:05:57.626220 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 14 00:05:57.626225 kernel: ITS: No ITS available, not enabling LPIs Jan 14 00:05:57.626230 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 14 00:05:57.626235 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Jan 14 00:05:57.626239 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 14 00:05:57.626244 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Jan 14 00:05:57.626249 kernel: Console: colour dummy device 80x25 Jan 14 00:05:57.626255 kernel: printk: legacy console [tty1] enabled Jan 14 00:05:57.626260 kernel: ACPI: Core revision 20240827 Jan 14 00:05:57.626265 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Jan 14 00:05:57.626270 kernel: pid_max: default: 32768 minimum: 301 Jan 14 00:05:57.626275 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 14 00:05:57.626280 kernel: landlock: Up and running. Jan 14 00:05:57.626285 kernel: SELinux: Initializing. Jan 14 00:05:57.626291 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 14 00:05:57.626296 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 14 00:05:57.626301 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1 Jan 14 00:05:57.626306 kernel: Hyper-V: Host Build 10.0.26102.1172-1-0 Jan 14 00:05:57.626315 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 14 00:05:57.626321 kernel: rcu: Hierarchical SRCU implementation. Jan 14 00:05:57.626326 kernel: rcu: Max phase no-delay instances is 400. Jan 14 00:05:57.626331 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 14 00:05:57.626336 kernel: Remapping and enabling EFI services. Jan 14 00:05:57.626342 kernel: smp: Bringing up secondary CPUs ... Jan 14 00:05:57.626347 kernel: Detected PIPT I-cache on CPU1 Jan 14 00:05:57.626352 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 14 00:05:57.626358 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Jan 14 00:05:57.626363 kernel: smp: Brought up 1 node, 2 CPUs Jan 14 00:05:57.626369 kernel: SMP: Total of 2 processors activated. 
Jan 14 00:05:57.626374 kernel: CPU: All CPU(s) started at EL1 Jan 14 00:05:57.626379 kernel: CPU features: detected: 32-bit EL0 Support Jan 14 00:05:57.626384 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 14 00:05:57.626389 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 14 00:05:57.626395 kernel: CPU features: detected: Common not Private translations Jan 14 00:05:57.626401 kernel: CPU features: detected: CRC32 instructions Jan 14 00:05:57.626406 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Jan 14 00:05:57.626411 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 14 00:05:57.626416 kernel: CPU features: detected: LSE atomic instructions Jan 14 00:05:57.626422 kernel: CPU features: detected: Privileged Access Never Jan 14 00:05:57.626427 kernel: CPU features: detected: Speculation barrier (SB) Jan 14 00:05:57.626432 kernel: CPU features: detected: TLB range maintenance instructions Jan 14 00:05:57.626438 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 14 00:05:57.626443 kernel: CPU features: detected: Scalable Vector Extension Jan 14 00:05:57.626448 kernel: alternatives: applying system-wide alternatives Jan 14 00:05:57.626454 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Jan 14 00:05:57.626459 kernel: SVE: maximum available vector length 16 bytes per vector Jan 14 00:05:57.626464 kernel: SVE: default vector length 16 bytes per vector Jan 14 00:05:57.626469 kernel: Memory: 3979900K/4194160K available (11200K kernel code, 2458K rwdata, 9088K rodata, 12480K init, 1038K bss, 193072K reserved, 16384K cma-reserved) Jan 14 00:05:57.626476 kernel: devtmpfs: initialized Jan 14 00:05:57.626481 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 14 00:05:57.626486 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 14 00:05:57.626491 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 14 00:05:57.626497 kernel: 0 pages in range for non-PLT usage Jan 14 00:05:57.626502 kernel: 515168 pages in range for PLT usage Jan 14 00:05:57.626507 kernel: pinctrl core: initialized pinctrl subsystem Jan 14 00:05:57.626513 kernel: SMBIOS 3.1.0 present. Jan 14 00:05:57.626518 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025 Jan 14 00:05:57.626523 kernel: DMI: Memory slots populated: 2/2 Jan 14 00:05:57.626528 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 14 00:05:57.626534 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 14 00:05:57.626539 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 14 00:05:57.626544 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 14 00:05:57.626549 kernel: audit: initializing netlink subsys (disabled) Jan 14 00:05:57.626555 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1 Jan 14 00:05:57.626561 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 14 00:05:57.626566 kernel: cpuidle: using governor menu Jan 14 00:05:57.626571 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 14 00:05:57.626576 kernel: ASID allocator initialised with 32768 entries Jan 14 00:05:57.626581 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 14 00:05:57.626587 kernel: Serial: AMBA PL011 UART driver Jan 14 00:05:57.626593 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 14 00:05:57.626598 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 14 00:05:57.626603 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 14 00:05:57.626608 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 14 00:05:57.626614 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 14 00:05:57.626619 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 14 00:05:57.626624 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 14 00:05:57.626630 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 14 00:05:57.626635 kernel: ACPI: Added _OSI(Module Device) Jan 14 00:05:57.626640 kernel: ACPI: Added _OSI(Processor Device) Jan 14 00:05:57.626645 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 14 00:05:57.626651 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 14 00:05:57.626656 kernel: ACPI: Interpreter enabled Jan 14 00:05:57.626661 kernel: ACPI: Using GIC for interrupt routing Jan 14 00:05:57.626667 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 14 00:05:57.626672 kernel: printk: legacy console [ttyAMA0] enabled Jan 14 00:05:57.626678 kernel: printk: legacy bootconsole [pl11] disabled Jan 14 00:05:57.626683 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 14 00:05:57.626688 kernel: ACPI: CPU0 has been hot-added Jan 14 00:05:57.626693 kernel: ACPI: CPU1 has been hot-added Jan 14 00:05:57.626698 kernel: iommu: Default domain type: Translated Jan 14 00:05:57.626704 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 14 00:05:57.626710 kernel: efivars: Registered efivars operations Jan 14 00:05:57.626715 kernel: vgaarb: loaded Jan 14 00:05:57.626720 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 14 00:05:57.626725 kernel: VFS: Disk quotas dquot_6.6.0 Jan 14 00:05:57.626730 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 14 00:05:57.626735 kernel: pnp: PnP ACPI init Jan 14 00:05:57.626741 kernel: pnp: PnP ACPI: found 0 devices Jan 14 00:05:57.626746 kernel: NET: Registered PF_INET protocol family Jan 14 00:05:57.626751 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 14 00:05:57.626757 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 14 00:05:57.626762 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 14 00:05:57.626767 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 14 00:05:57.626772 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 14 00:05:57.626778 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 14 00:05:57.626784 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 14 00:05:57.626789 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 14 00:05:57.626794 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 14 00:05:57.626799 kernel: PCI: CLS 0 bytes, default 64 Jan 14 00:05:57.626804 kernel: kvm [1]: HYP mode not available Jan 
14 00:05:57.626810 kernel: Initialise system trusted keyrings Jan 14 00:05:57.626815 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 14 00:05:57.626821 kernel: Key type asymmetric registered Jan 14 00:05:57.626826 kernel: Asymmetric key parser 'x509' registered Jan 14 00:05:57.626831 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 14 00:05:57.626836 kernel: io scheduler mq-deadline registered Jan 14 00:05:57.626842 kernel: io scheduler kyber registered Jan 14 00:05:57.626847 kernel: io scheduler bfq registered Jan 14 00:05:57.626852 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 14 00:05:57.626858 kernel: thunder_xcv, ver 1.0 Jan 14 00:05:57.626863 kernel: thunder_bgx, ver 1.0 Jan 14 00:05:57.626868 kernel: nicpf, ver 1.0 Jan 14 00:05:57.626873 kernel: nicvf, ver 1.0 Jan 14 00:05:57.627004 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 14 00:05:57.627074 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-14T00:05:55 UTC (1768349155) Jan 14 00:05:57.627083 kernel: efifb: probing for efifb Jan 14 00:05:57.627088 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 14 00:05:57.627093 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 14 00:05:57.627099 kernel: efifb: scrolling: redraw Jan 14 00:05:57.627104 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 14 00:05:57.627109 kernel: Console: switching to colour frame buffer device 128x48 Jan 14 00:05:57.627114 kernel: fb0: EFI VGA frame buffer device Jan 14 00:05:57.627120 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 14 00:05:57.627126 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 14 00:05:57.627131 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jan 14 00:05:57.627136 kernel: watchdog: NMI not fully supported Jan 14 00:05:57.627141 kernel: watchdog: Hard watchdog permanently disabled Jan 14 00:05:57.627146 kernel: NET: Registered PF_INET6 protocol family Jan 14 00:05:57.627152 kernel: Segment Routing with IPv6 Jan 14 00:05:57.627158 kernel: In-situ OAM (IOAM) with IPv6 Jan 14 00:05:57.627163 kernel: NET: Registered PF_PACKET protocol family Jan 14 00:05:57.627168 kernel: Key type dns_resolver registered Jan 14 00:05:57.627173 kernel: registered taskstats version 1 Jan 14 00:05:57.627178 kernel: Loading compiled-in X.509 certificates Jan 14 00:05:57.627184 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: d16d100cda59d8093883df975a5384fda36b7d35' Jan 14 00:05:57.627189 kernel: Demotion targets for Node 0: null Jan 14 00:05:57.627195 kernel: Key type .fscrypt registered Jan 14 00:05:57.627200 kernel: Key type fscrypt-provisioning registered Jan 14 00:05:57.627205 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 14 00:05:57.627210 kernel: ima: Allocated hash algorithm: sha1 Jan 14 00:05:57.627215 kernel: ima: No architecture policies found Jan 14 00:05:57.627220 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 14 00:05:57.627225 kernel: clk: Disabling unused clocks Jan 14 00:05:57.627231 kernel: PM: genpd: Disabling unused power domains Jan 14 00:05:57.627237 kernel: Freeing unused kernel memory: 12480K Jan 14 00:05:57.627242 kernel: Run /init as init process Jan 14 00:05:57.627247 kernel: with arguments: Jan 14 00:05:57.627252 kernel: /init Jan 14 00:05:57.627257 kernel: with environment: Jan 14 00:05:57.627262 kernel: HOME=/ Jan 14 00:05:57.627267 kernel: TERM=linux Jan 14 00:05:57.627273 kernel: hv_vmbus: Vmbus version:5.3 Jan 14 00:05:57.627278 kernel: SCSI subsystem initialized Jan 14 00:05:57.627284 kernel: hv_vmbus: registering driver hid_hyperv Jan 14 00:05:57.627289 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 14 00:05:57.627370 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 14 00:05:57.627377 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 14 00:05:57.627384 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 14 00:05:57.627389 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 14 00:05:57.627395 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 14 00:05:57.627400 kernel: PTP clock support registered Jan 14 00:05:57.627405 kernel: hv_utils: Registering HyperV Utility Driver Jan 14 00:05:57.627410 kernel: hv_vmbus: registering driver hv_utils Jan 14 00:05:57.627415 kernel: hv_utils: Shutdown IC version 3.2 Jan 14 00:05:57.627421 kernel: hv_utils: Heartbeat IC version 3.0 Jan 14 00:05:57.627427 kernel: hv_utils: TimeSync IC version 4.0 Jan 14 00:05:57.627432 kernel: hv_vmbus: registering driver hv_storvsc Jan 14 00:05:57.627523 kernel: scsi host1: storvsc_host_t Jan 14 00:05:57.627601 kernel: scsi host0: storvsc_host_t Jan 14 00:05:57.627686 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 14 00:05:57.627768 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 14 00:05:57.627842 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 14 00:05:57.627916 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Jan 14 00:05:57.627988 kernel: sd 1:0:0:0: [sda] Write Protect is off Jan 14 00:05:57.629550 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 14 00:05:57.629633 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 14 00:05:57.629737 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#60 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 14 00:05:57.629808 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 14 00:05:57.629815 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 00:05:57.629890 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Jan 14 00:05:57.629969 kernel: sr 1:0:0:2: [sr0] scsi-1 drive Jan 14 00:05:57.629978 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 14 00:05:57.629984 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 14 00:05:57.629989 kernel: device-mapper: uevent: version 1.0.3 Jan 14 00:05:57.630005 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 14 00:05:57.630082 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0 Jan 14 00:05:57.630088 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 14 00:05:57.630094 kernel: raid6: neonx8 gen() 18530 MB/s Jan 14 00:05:57.630100 kernel: raid6: neonx4 gen() 18561 MB/s Jan 14 00:05:57.630106 kernel: raid6: neonx2 gen() 17113 MB/s Jan 14 00:05:57.630111 kernel: raid6: neonx1 gen() 15051 MB/s Jan 14 00:05:57.630116 kernel: raid6: int64x8 gen() 10524 MB/s Jan 14 00:05:57.630121 kernel: raid6: int64x4 gen() 10620 MB/s Jan 14 00:05:57.630127 kernel: raid6: int64x2 gen() 8980 MB/s Jan 14 00:05:57.630132 kernel: raid6: int64x1 gen() 7051 MB/s Jan 14 00:05:57.630137 kernel: raid6: using algorithm neonx4 gen() 18561 MB/s Jan 14 00:05:57.630143 kernel: raid6: .... xor() 15134 MB/s, rmw enabled Jan 14 00:05:57.630148 kernel: raid6: using neon recovery algorithm Jan 14 00:05:57.630154 kernel: xor: measuring software checksum speed Jan 14 00:05:57.630159 kernel: 8regs : 28609 MB/sec Jan 14 00:05:57.630164 kernel: 32regs : 28236 MB/sec Jan 14 00:05:57.630169 kernel: arm64_neon : 37477 MB/sec Jan 14 00:05:57.630175 kernel: xor: using function: arm64_neon (37477 MB/sec) Jan 14 00:05:57.630181 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 14 00:05:57.630186 kernel: BTRFS: device fsid 68b1ce8e-a637-4e91-acf8-5a2e05e289e5 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (344) Jan 14 00:05:57.630192 kernel: BTRFS info (device dm-0): first mount of filesystem 68b1ce8e-a637-4e91-acf8-5a2e05e289e5 Jan 14 00:05:57.630197 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 14 00:05:57.630203 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 14 00:05:57.630208 kernel: BTRFS info (device dm-0): enabling free space tree Jan 14 00:05:57.630213 kernel: loop: module loaded Jan 14 00:05:57.630219 kernel: loop0: detected capacity change from 0 to 91832 Jan 14 00:05:57.630224 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 14 00:05:57.630230 systemd[1]: Successfully made /usr/ read-only. Jan 14 00:05:57.630238 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 00:05:57.630244 systemd[1]: Detected virtualization microsoft. Jan 14 00:05:57.630249 systemd[1]: Detected architecture arm64. Jan 14 00:05:57.630256 systemd[1]: Running in initrd. Jan 14 00:05:57.630262 systemd[1]: No hostname configured, using default hostname. Jan 14 00:05:57.630268 systemd[1]: Hostname set to . Jan 14 00:05:57.630273 systemd[1]: Initializing machine ID from random generator. Jan 14 00:05:57.630279 systemd[1]: Queued start job for default target initrd.target. Jan 14 00:05:57.630284 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 00:05:57.630291 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 14 00:05:57.630297 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 00:05:57.630303 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 14 00:05:57.630309 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 00:05:57.630315 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 14 00:05:57.630321 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 14 00:05:57.630328 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 00:05:57.630333 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 00:05:57.630339 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 14 00:05:57.630345 systemd[1]: Reached target paths.target - Path Units. Jan 14 00:05:57.630350 systemd[1]: Reached target slices.target - Slice Units. Jan 14 00:05:57.630356 systemd[1]: Reached target swap.target - Swaps. Jan 14 00:05:57.630362 systemd[1]: Reached target timers.target - Timer Units. Jan 14 00:05:57.630368 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 00:05:57.630374 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 00:05:57.630380 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 00:05:57.630385 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 14 00:05:57.630391 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 14 00:05:57.630397 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 00:05:57.630407 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 00:05:57.630414 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 00:05:57.630420 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 00:05:57.630426 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 14 00:05:57.630431 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 14 00:05:57.630438 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 00:05:57.630444 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 14 00:05:57.630450 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 14 00:05:57.630456 systemd[1]: Starting systemd-fsck-usr.service... Jan 14 00:05:57.630462 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 00:05:57.630467 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 00:05:57.630474 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 00:05:57.630493 systemd-journald[482]: Collecting audit messages is enabled. Jan 14 00:05:57.630509 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 14 00:05:57.630515 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jan 14 00:05:57.630522 systemd-journald[482]: Journal started Jan 14 00:05:57.630535 systemd-journald[482]: Runtime Journal (/run/log/journal/a1d7390a0d5f492c95f7e55c8804291e) is 8M, max 78.3M, 70.3M free. Jan 14 00:05:57.639005 kernel: audit: type=1130 audit(1768349157.628:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:57.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:57.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:57.673899 kernel: audit: type=1130 audit(1768349157.653:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:57.673938 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 00:05:57.673950 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 14 00:05:57.693286 kernel: audit: type=1130 audit(1768349157.680:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:57.693318 kernel: Bridge firewalling registered Jan 14 00:05:57.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:57.693765 systemd[1]: Finished systemd-fsck-usr.service. Jan 14 00:05:57.695890 systemd-modules-load[484]: Inserted module 'br_netfilter' Jan 14 00:05:57.724007 kernel: audit: type=1130 audit(1768349157.704:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:57.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:57.705552 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 00:05:57.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:57.740207 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 00:05:57.767981 kernel: audit: type=1130 audit(1768349157.727:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:57.768007 kernel: audit: type=1130 audit(1768349157.744:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:05:57.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:57.758978 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 00:05:57.780527 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 00:05:57.790116 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 00:05:57.799428 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 00:05:57.819764 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 00:05:57.845605 kernel: audit: type=1130 audit(1768349157.824:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:57.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:57.842739 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 00:05:57.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:57.843107 systemd-tmpfiles[502]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 14 00:05:57.876025 kernel: audit: type=1130 audit(1768349157.850:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:57.868069 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 00:05:57.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:57.881338 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 00:05:57.912863 kernel: audit: type=1130 audit(1768349157.880:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:57.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:57.906085 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 14 00:05:57.923000 audit: BPF prog-id=6 op=LOAD Jan 14 00:05:57.924860 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 00:05:57.936122 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 14 00:05:57.954596 dracut-cmdline[514]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=3d3f73de8d2693594dfefd279d2c8d77c282a05a4cbc54177503d31784261f6b Jan 14 00:05:57.998509 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 00:05:58.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:58.068022 systemd-resolved[516]: Positive Trust Anchors: Jan 14 00:05:58.068034 systemd-resolved[516]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 00:05:58.068036 systemd-resolved[516]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 00:05:58.068055 systemd-resolved[516]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 00:05:58.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:58.088133 systemd-resolved[516]: Defaulting to hostname 'linux'. Jan 14 00:05:58.093708 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 00:05:58.098843 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 00:05:58.217013 kernel: Loading iSCSI transport class v2.0-870. Jan 14 00:05:58.234020 kernel: iscsi: registered transport (tcp) Jan 14 00:05:58.251298 kernel: iscsi: registered transport (qla4xxx) Jan 14 00:05:58.251313 kernel: QLogic iSCSI HBA Driver Jan 14 00:05:58.277379 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 00:05:58.292734 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 00:05:58.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:58.304432 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 00:05:58.345080 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 14 00:05:58.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:58.354248 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jan 14 00:05:58.358539 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 14 00:05:58.396241 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 14 00:05:58.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:58.403000 audit: BPF prog-id=7 op=LOAD Jan 14 00:05:58.403000 audit: BPF prog-id=8 op=LOAD Jan 14 00:05:58.405199 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 00:05:58.449933 systemd-udevd[762]: Using default interface naming scheme 'v257'. Jan 14 00:05:58.460114 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 00:05:58.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:58.475619 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 14 00:05:58.495035 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 00:05:58.505066 dracut-pre-trigger[863]: rd.md=0: removing MD RAID activation Jan 14 00:05:58.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:58.509000 audit: BPF prog-id=9 op=LOAD Jan 14 00:05:58.510501 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 00:05:58.541957 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 00:05:58.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:58.553950 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 00:05:58.567826 systemd-networkd[875]: lo: Link UP Jan 14 00:05:58.567832 systemd-networkd[875]: lo: Gained carrier Jan 14 00:05:58.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:58.570553 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 00:05:58.577139 systemd[1]: Reached target network.target - Network. Jan 14 00:05:58.620624 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 00:05:58.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:58.633708 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 14 00:05:58.700024 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#22 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 14 00:05:58.729033 kernel: hv_vmbus: registering driver hv_netvsc Jan 14 00:05:58.742539 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 00:05:58.747296 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 14 00:05:58.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:58.753287 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 00:05:58.765652 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 00:05:58.797302 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 00:05:58.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:58.813009 kernel: hv_netvsc 7ced8dd0-5241-7ced-8dd0-52417ced8dd0 eth0: VF slot 1 added Jan 14 00:05:58.821498 systemd-networkd[875]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 00:05:58.830581 systemd-networkd[875]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 00:05:58.847520 kernel: hv_vmbus: registering driver hv_pci Jan 14 00:05:58.847537 kernel: hv_pci 0ad0ad5b-643d-4055-8513-07038ac3289e: PCI VMBus probing: Using version 0x10004 Jan 14 00:05:58.835605 systemd-networkd[875]: eth0: Link UP Jan 14 00:05:58.835728 systemd-networkd[875]: eth0: Gained carrier Jan 14 00:05:58.835739 systemd-networkd[875]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 00:05:58.873045 systemd-networkd[875]: eth0: DHCPv4 address 10.200.20.18/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 14 00:05:58.895784 kernel: hv_pci 0ad0ad5b-643d-4055-8513-07038ac3289e: PCI host bridge to bus 643d:00 Jan 14 00:05:58.895967 kernel: pci_bus 643d:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 14 00:05:58.897333 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 14 00:05:58.911622 kernel: pci_bus 643d:00: No busn resource found for root bus, will use [bus 00-ff] Jan 14 00:05:58.918503 kernel: pci 643d:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Jan 14 00:05:58.918677 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 14 00:05:58.936656 kernel: pci 643d:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 14 00:05:58.936734 kernel: pci 643d:00:02.0: enabling Extended Tags Jan 14 00:05:58.933209 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 14 00:05:58.969427 kernel: pci 643d:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 643d:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Jan 14 00:05:58.969590 kernel: pci_bus 643d:00: busn_res: [bus 00-ff] end is updated to 00 Jan 14 00:05:58.969674 kernel: pci 643d:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Jan 14 00:05:58.959550 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 14 00:05:58.983500 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Jan 14 00:05:59.103770 kernel: mlx5_core 643d:00:02.0: enabling device (0000 -> 0002) Jan 14 00:05:59.113951 kernel: mlx5_core 643d:00:02.0: PTM is not supported by PCIe Jan 14 00:05:59.114143 kernel: mlx5_core 643d:00:02.0: firmware version: 16.30.5026 Jan 14 00:05:59.291598 kernel: hv_netvsc 7ced8dd0-5241-7ced-8dd0-52417ced8dd0 eth0: VF registering: eth1 Jan 14 00:05:59.291840 kernel: mlx5_core 643d:00:02.0 eth1: joined to eth0 Jan 14 00:05:59.299028 kernel: mlx5_core 643d:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 14 00:05:59.309176 systemd-networkd[875]: eth1: Interface name change detected, renamed to enP25661s1. Jan 14 00:05:59.314710 kernel: mlx5_core 643d:00:02.0 enP25661s1: renamed from eth1 Jan 14 00:05:59.327163 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 14 00:05:59.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:59.337987 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 00:05:59.350103 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 00:05:59.355854 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 00:05:59.366307 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 14 00:05:59.392634 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 14 00:05:59.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:59.446014 kernel: mlx5_core 643d:00:02.0 enP25661s1: Link up Jan 14 00:05:59.480006 kernel: hv_netvsc 7ced8dd0-5241-7ced-8dd0-52417ced8dd0 eth0: Data path switched to VF: enP25661s1 Jan 14 00:05:59.480230 systemd-networkd[875]: enP25661s1: Link UP Jan 14 00:05:59.842264 systemd-networkd[875]: enP25661s1: Gained carrier Jan 14 00:06:00.083620 disk-uuid[1001]: Warning: The kernel is still using the old partition table. Jan 14 00:06:00.083620 disk-uuid[1001]: The new table will be used at the next reboot or after you Jan 14 00:06:00.083620 disk-uuid[1001]: run partprobe(8) or kpartx(8) Jan 14 00:06:00.083620 disk-uuid[1001]: The operation has completed successfully. Jan 14 00:06:00.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:00.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:00.092826 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 14 00:06:00.092958 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 14 00:06:00.099869 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jan 14 00:06:00.148891 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1134) Jan 14 00:06:00.148927 kernel: BTRFS info (device sda6): first mount of filesystem 2541ae65-ff33-437e-8e42-0fd50870835d Jan 14 00:06:00.153957 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 14 00:06:00.167208 kernel: BTRFS info (device sda6): turning on async discard Jan 14 00:06:00.167249 kernel: BTRFS info (device sda6): enabling free space tree Jan 14 00:06:00.177005 kernel: BTRFS info (device sda6): last unmount of filesystem 2541ae65-ff33-437e-8e42-0fd50870835d Jan 14 00:06:00.177386 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 14 00:06:00.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:00.183205 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 14 00:06:00.398563 ignition[1153]: Ignition 2.24.0 Jan 14 00:06:00.401252 ignition[1153]: Stage: fetch-offline Jan 14 00:06:00.401749 ignition[1153]: no configs at "/usr/lib/ignition/base.d" Jan 14 00:06:00.406314 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 00:06:00.401759 ignition[1153]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 00:06:00.425775 kernel: kauditd_printk_skb: 22 callbacks suppressed Jan 14 00:06:00.425794 kernel: audit: type=1130 audit(1768349160.418:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:00.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:00.401835 ignition[1153]: parsed url from cmdline: "" Jan 14 00:06:00.401838 ignition[1153]: no config URL provided Jan 14 00:06:00.401896 ignition[1153]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 00:06:00.426872 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 14 00:06:00.401903 ignition[1153]: no config at "/usr/lib/ignition/user.ign" Jan 14 00:06:00.401906 ignition[1153]: failed to fetch config: resource requires networking Jan 14 00:06:00.402138 ignition[1153]: Ignition finished successfully Jan 14 00:06:00.459165 ignition[1160]: Ignition 2.24.0 Jan 14 00:06:00.459171 ignition[1160]: Stage: fetch Jan 14 00:06:00.459359 ignition[1160]: no configs at "/usr/lib/ignition/base.d" Jan 14 00:06:00.459366 ignition[1160]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 00:06:00.459439 ignition[1160]: parsed url from cmdline: "" Jan 14 00:06:00.459442 ignition[1160]: no config URL provided Jan 14 00:06:00.459445 ignition[1160]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 00:06:00.459449 ignition[1160]: no config at "/usr/lib/ignition/user.ign" Jan 14 00:06:00.459463 ignition[1160]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 14 00:06:00.521989 ignition[1160]: GET result: OK Jan 14 00:06:00.522073 ignition[1160]: config has been read from IMDS userdata Jan 14 00:06:00.522086 ignition[1160]: parsing config with SHA512: 065981590814135550a6e80ff22069bf77bdd29661ee0210fcd657b12cbc963335da58f6cec05bd425c4c242074908f48f7a9bdc15883e3b37dac52038a79d21 Jan 14 00:06:00.526621 unknown[1160]: fetched base config from "system" Jan 14 00:06:00.526864 ignition[1160]: fetch: fetch complete Jan 14 00:06:00.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:00.526627 unknown[1160]: fetched base config from "system" Jan 14 00:06:00.557341 kernel: audit: type=1130 audit(1768349160.536:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:00.526868 ignition[1160]: fetch: fetch passed Jan 14 00:06:00.526630 unknown[1160]: fetched user config from "azure" Jan 14 00:06:00.526906 ignition[1160]: Ignition finished successfully Jan 14 00:06:00.529435 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 14 00:06:00.534040 systemd-networkd[875]: eth0: Gained IPv6LL Jan 14 00:06:00.538453 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 14 00:06:00.575941 ignition[1166]: Ignition 2.24.0 Jan 14 00:06:00.575946 ignition[1166]: Stage: kargs Jan 14 00:06:00.580755 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 14 00:06:00.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:00.576178 ignition[1166]: no configs at "/usr/lib/ignition/base.d" Jan 14 00:06:00.609632 kernel: audit: type=1130 audit(1768349160.587:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:00.588448 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 14 00:06:00.576185 ignition[1166]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 00:06:00.576864 ignition[1166]: kargs: kargs passed Jan 14 00:06:00.576912 ignition[1166]: Ignition finished successfully Jan 14 00:06:00.624036 ignition[1172]: Ignition 2.24.0 Jan 14 00:06:00.626784 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 14 00:06:00.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:00.624041 ignition[1172]: Stage: disks Jan 14 00:06:00.654913 kernel: audit: type=1130 audit(1768349160.632:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:00.624281 ignition[1172]: no configs at "/usr/lib/ignition/base.d" Jan 14 00:06:00.647231 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 14 00:06:00.624288 ignition[1172]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 00:06:00.652762 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 00:06:00.624927 ignition[1172]: disks: disks passed Jan 14 00:06:00.660162 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 00:06:00.624965 ignition[1172]: Ignition finished successfully Jan 14 00:06:00.669171 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 00:06:00.678117 systemd[1]: Reached target basic.target - Basic System. Jan 14 00:06:00.687203 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 14 00:06:00.748016 systemd-fsck[1180]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks Jan 14 00:06:00.756954 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 14 00:06:00.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:00.763072 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 14 00:06:00.786786 kernel: audit: type=1130 audit(1768349160.761:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:00.886032 kernel: EXT4-fs (sda9): mounted filesystem db887ae3-d64c-46de-9f1e-de51a801ae44 r/w with ordered data mode. Quota mode: none. Jan 14 00:06:00.886483 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 14 00:06:00.890331 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 14 00:06:00.905482 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 00:06:00.914147 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 14 00:06:00.924845 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 14 00:06:00.935430 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 14 00:06:00.940074 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 00:06:00.955346 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jan 14 00:06:00.960426 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 14 00:06:00.979010 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1194) Jan 14 00:06:00.989487 kernel: BTRFS info (device sda6): first mount of filesystem 2541ae65-ff33-437e-8e42-0fd50870835d Jan 14 00:06:00.989522 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 14 00:06:00.999362 kernel: BTRFS info (device sda6): turning on async discard Jan 14 00:06:00.999400 kernel: BTRFS info (device sda6): enabling free space tree Jan 14 00:06:01.000737 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 00:06:01.078566 coreos-metadata[1196]: Jan 14 00:06:01.078 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 14 00:06:01.084625 coreos-metadata[1196]: Jan 14 00:06:01.084 INFO Fetch successful Jan 14 00:06:01.084625 coreos-metadata[1196]: Jan 14 00:06:01.084 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 14 00:06:01.097689 coreos-metadata[1196]: Jan 14 00:06:01.097 INFO Fetch successful Jan 14 00:06:01.102045 coreos-metadata[1196]: Jan 14 00:06:01.102 INFO wrote hostname ci-4547.0.0-n-16ff4e9fd7 to /sysroot/etc/hostname Jan 14 00:06:01.109606 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 00:06:01.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:01.133070 kernel: audit: type=1130 audit(1768349161.114:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:01.426506 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 14 00:06:01.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:01.437299 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 14 00:06:01.451680 kernel: audit: type=1130 audit(1768349161.435:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:01.464524 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 14 00:06:01.482204 kernel: BTRFS info (device sda6): last unmount of filesystem 2541ae65-ff33-437e-8e42-0fd50870835d Jan 14 00:06:01.472777 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 14 00:06:01.505462 ignition[1297]: INFO : Ignition 2.24.0 Jan 14 00:06:01.505462 ignition[1297]: INFO : Stage: mount Jan 14 00:06:01.512368 ignition[1297]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 00:06:01.512368 ignition[1297]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 00:06:01.512368 ignition[1297]: INFO : mount: mount passed Jan 14 00:06:01.512368 ignition[1297]: INFO : Ignition finished successfully Jan 14 00:06:01.561680 kernel: audit: type=1130 audit(1768349161.516:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:06:01.561704 kernel: audit: type=1130 audit(1768349161.539:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:01.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:01.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:01.508650 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 14 00:06:01.532854 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 14 00:06:01.542306 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 14 00:06:01.579047 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 00:06:01.606010 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1309) Jan 14 00:06:01.617549 kernel: BTRFS info (device sda6): first mount of filesystem 2541ae65-ff33-437e-8e42-0fd50870835d Jan 14 00:06:01.617592 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 14 00:06:01.627276 kernel: BTRFS info (device sda6): turning on async discard Jan 14 00:06:01.627312 kernel: BTRFS info (device sda6): enabling free space tree Jan 14 00:06:01.628585 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 00:06:01.654518 ignition[1326]: INFO : Ignition 2.24.0 Jan 14 00:06:01.654518 ignition[1326]: INFO : Stage: files Jan 14 00:06:01.661465 ignition[1326]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 00:06:01.661465 ignition[1326]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 00:06:01.661465 ignition[1326]: DEBUG : files: compiled without relabeling support, skipping Jan 14 00:06:01.661465 ignition[1326]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 14 00:06:01.661465 ignition[1326]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 14 00:06:01.688490 ignition[1326]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 14 00:06:01.688490 ignition[1326]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 14 00:06:01.688490 ignition[1326]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 14 00:06:01.681283 unknown[1326]: wrote ssh authorized keys file for user: core Jan 14 00:06:01.709652 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 14 00:06:01.709652 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jan 14 00:06:01.743109 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 14 00:06:01.944896 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 14 00:06:01.944896 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 14 00:06:01.959978 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] 
writing file "/sysroot/home/core/install.sh" Jan 14 00:06:01.959978 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 14 00:06:01.959978 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 14 00:06:01.959978 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 00:06:01.959978 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 00:06:01.959978 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 00:06:01.959978 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 00:06:02.008299 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 00:06:02.008299 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 00:06:02.008299 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 14 00:06:02.008299 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 14 00:06:02.008299 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 14 00:06:02.008299 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Jan 14 00:06:02.375308 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 14 00:06:02.623798 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 14 00:06:02.623798 ignition[1326]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 14 00:06:02.639036 ignition[1326]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 00:06:02.652758 ignition[1326]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 00:06:02.652758 ignition[1326]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 14 00:06:02.652758 ignition[1326]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 14 00:06:02.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:06:02.689000 ignition[1326]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 14 00:06:02.689000 ignition[1326]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 14 00:06:02.689000 ignition[1326]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 14 00:06:02.689000 ignition[1326]: INFO : files: files passed Jan 14 00:06:02.689000 ignition[1326]: INFO : Ignition finished successfully Jan 14 00:06:02.727691 kernel: audit: type=1130 audit(1768349162.671:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:02.662679 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 14 00:06:02.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:02.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:02.672688 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 14 00:06:02.717912 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 14 00:06:02.727521 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 14 00:06:02.727596 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 14 00:06:02.757839 initrd-setup-root-after-ignition[1357]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 00:06:02.757839 initrd-setup-root-after-ignition[1357]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 14 00:06:02.770861 initrd-setup-root-after-ignition[1361]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 00:06:02.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:02.765074 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 00:06:02.777008 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 14 00:06:02.788873 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 14 00:06:02.843826 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 14 00:06:02.843953 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 14 00:06:02.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:02.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:02.853729 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 14 00:06:02.862482 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Jan 14 00:06:02.871682 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 14 00:06:02.872468 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 14 00:06:02.909639 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 00:06:02.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:02.922054 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 00:06:02.935855 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 00:06:02.938428 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 00:06:02.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:02.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:02.940025 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 14 00:06:02.946191 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 00:06:02.954354 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 00:06:02.989311 systemd[1]: Stopped target timers.target - Timer Units. Jan 14 00:06:03.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:02.997921 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 14 00:06:02.998004 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 00:06:03.012115 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 00:06:03.016473 systemd[1]: Stopped target basic.target - Basic System. Jan 14 00:06:03.024805 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 14 00:06:03.033754 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 00:06:03.042998 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 14 00:06:03.051098 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 14 00:06:03.060430 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 14 00:06:03.069322 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 00:06:03.079681 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 00:06:03.088988 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 00:06:03.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:03.096881 systemd[1]: Stopped target swap.target - Swaps. Jan 14 00:06:03.105305 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 14 00:06:03.105381 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jan 14 00:06:03.117500 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 00:06:03.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:03.122123 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 00:06:03.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:03.130284 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 14 00:06:03.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:03.134545 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 00:06:03.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:03.140063 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 00:06:03.140121 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 00:06:03.152262 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 14 00:06:03.152296 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 00:06:03.157525 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 00:06:03.157551 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 00:06:03.166128 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 14 00:06:03.222833 ignition[1382]: INFO : Ignition 2.24.0 Jan 14 00:06:03.222833 ignition[1382]: INFO : Stage: umount Jan 14 00:06:03.222833 ignition[1382]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 00:06:03.222833 ignition[1382]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 00:06:03.222833 ignition[1382]: INFO : umount: umount passed Jan 14 00:06:03.222833 ignition[1382]: INFO : Ignition finished successfully Jan 14 00:06:03.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:03.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:03.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:03.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:03.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:06:03.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:03.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:03.166160 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 00:06:03.181106 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 00:06:03.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:03.205090 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 00:06:03.219063 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 00:06:03.219125 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 00:06:03.228329 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 00:06:03.228373 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 00:06:03.237331 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 14 00:06:03.237370 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 00:06:03.246706 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 14 00:06:03.246954 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 00:06:03.256584 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 00:06:03.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:03.256678 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 00:06:03.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:03.266453 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 00:06:03.266500 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 00:06:03.273803 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 14 00:06:03.273837 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 14 00:06:03.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:03.418000 audit: BPF prog-id=9 op=UNLOAD Jan 14 00:06:03.278416 systemd[1]: Stopped target network.target - Network. Jan 14 00:06:03.285940 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 00:06:03.285983 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 00:06:03.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:03.431000 audit: BPF prog-id=6 op=UNLOAD Jan 14 00:06:03.297238 systemd[1]: Stopped target paths.target - Path Units. 
Jan 14 00:06:03.307572 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 14 00:06:03.310779 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 00:06:03.316145 systemd[1]: Stopped target slices.target - Slice Units. Jan 14 00:06:03.329749 systemd[1]: Stopped target sockets.target - Socket Units. Jan 14 00:06:03.342761 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 00:06:03.342824 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 00:06:03.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:03.351031 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 00:06:03.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:03.351064 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 00:06:03.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:03.359794 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 14 00:06:03.359810 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 14 00:06:03.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:03.367506 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 00:06:03.367557 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 00:06:03.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:03.375509 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 00:06:03.375542 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 00:06:03.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:03.384341 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 00:06:03.392936 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 00:06:03.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:03.403454 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 14 00:06:03.407613 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 00:06:03.410708 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 00:06:03.423138 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 14 00:06:03.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:06:03.423228 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 00:06:03.653565 kernel: hv_netvsc 7ced8dd0-5241-7ced-8dd0-52417ced8dd0 eth0: Data path switched from VF: enP25661s1 Jan 14 00:06:03.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:03.436574 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 14 00:06:03.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:03.444192 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 00:06:03.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:03.444231 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 14 00:06:03.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:03.454300 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 00:06:03.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:03.460899 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 00:06:03.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:03.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:03.460966 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 00:06:03.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:03.461322 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 00:06:03.461363 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 00:06:03.461583 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 00:06:03.461608 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 14 00:06:03.461863 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 00:06:03.482231 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 00:06:03.483043 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 00:06:03.557122 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 00:06:03.557163 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jan 14 00:06:03.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:03.565282 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 14 00:06:03.565308 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 00:06:03.569409 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 00:06:03.569452 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 00:06:03.581607 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 00:06:03.581652 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 14 00:06:03.594080 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 00:06:03.594132 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 00:06:03.608972 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 00:06:03.626852 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 14 00:06:03.626919 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 00:06:03.638180 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 00:06:03.638232 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 00:06:03.649427 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 14 00:06:03.649473 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 00:06:03.658588 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 14 00:06:03.658635 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 00:06:03.667985 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 00:06:03.668029 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 00:06:03.677266 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 14 00:06:03.862439 systemd-journald[482]: Received SIGTERM from PID 1 (systemd). Jan 14 00:06:03.677369 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 00:06:03.684467 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 00:06:03.684549 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 00:06:03.694576 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 00:06:03.694697 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 00:06:03.739346 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 14 00:06:03.739651 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 00:06:03.748549 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 00:06:03.757656 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 00:06:03.789133 systemd[1]: Switching root. 
Jan 14 00:06:03.902058 systemd-journald[482]: Journal stopped Jan 14 00:06:05.960817 kernel: SELinux: policy capability network_peer_controls=1 Jan 14 00:06:05.960837 kernel: SELinux: policy capability open_perms=1 Jan 14 00:06:05.960845 kernel: SELinux: policy capability extended_socket_class=1 Jan 14 00:06:05.960851 kernel: SELinux: policy capability always_check_network=0 Jan 14 00:06:05.960859 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 14 00:06:05.960866 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 14 00:06:05.960873 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 14 00:06:05.960878 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 14 00:06:05.960884 kernel: SELinux: policy capability userspace_initial_context=0 Jan 14 00:06:05.960891 systemd[1]: Successfully loaded SELinux policy in 89.184ms. Jan 14 00:06:05.960899 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.566ms. Jan 14 00:06:05.960906 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 00:06:05.960913 systemd[1]: Detected virtualization microsoft. Jan 14 00:06:05.960919 systemd[1]: Detected architecture arm64. Jan 14 00:06:05.960927 systemd[1]: Detected first boot. Jan 14 00:06:05.960933 systemd[1]: Hostname set to . Jan 14 00:06:05.960939 systemd[1]: Initializing machine ID from random generator. Jan 14 00:06:05.960946 zram_generator::config[1423]: No configuration found. Jan 14 00:06:05.960953 kernel: NET: Registered PF_VSOCK protocol family Jan 14 00:06:05.960960 systemd[1]: Populated /etc with preset unit settings. Jan 14 00:06:05.960966 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 14 00:06:05.960973 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 14 00:06:05.960979 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 14 00:06:05.960986 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 14 00:06:05.961007 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 14 00:06:05.961016 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 14 00:06:05.961023 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 14 00:06:05.961030 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 14 00:06:05.961036 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 14 00:06:05.961043 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 14 00:06:05.961049 systemd[1]: Created slice user.slice - User and Session Slice. Jan 14 00:06:05.961056 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 00:06:05.961063 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 00:06:05.961069 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 14 00:06:05.961075 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Jan 14 00:06:05.961082 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 14 00:06:05.961088 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 00:06:05.961095 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 14 00:06:05.961102 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 00:06:05.961109 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 00:06:05.961117 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 14 00:06:05.961124 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 14 00:06:05.961130 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 14 00:06:05.961137 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 14 00:06:05.961144 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 00:06:05.961151 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 00:06:05.961157 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 14 00:06:05.961164 systemd[1]: Reached target slices.target - Slice Units. Jan 14 00:06:05.961171 systemd[1]: Reached target swap.target - Swaps. Jan 14 00:06:05.961177 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 14 00:06:05.961184 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 14 00:06:05.961192 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 14 00:06:05.961198 kernel: kauditd_printk_skb: 61 callbacks suppressed Jan 14 00:06:05.961205 kernel: audit: type=1335 audit(1768349165.444:104): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 14 00:06:05.961213 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 00:06:05.961219 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 14 00:06:05.961226 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 00:06:05.961233 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 14 00:06:05.961239 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 14 00:06:05.961246 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 00:06:05.961252 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 00:06:05.961260 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 14 00:06:05.961267 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 14 00:06:05.961273 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 14 00:06:05.961280 systemd[1]: Mounting media.mount - External Media Directory... Jan 14 00:06:05.961287 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 14 00:06:05.961293 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 14 00:06:05.961301 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jan 14 00:06:05.961308 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 14 00:06:05.961315 systemd[1]: Reached target machines.target - Containers. Jan 14 00:06:05.961322 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 14 00:06:05.961329 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 00:06:05.961335 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 00:06:05.961342 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 14 00:06:05.961349 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 00:06:05.961356 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 00:06:05.961363 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 00:06:05.961369 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 14 00:06:05.961376 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 00:06:05.961383 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 14 00:06:05.961390 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 14 00:06:05.961397 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 14 00:06:05.961404 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 14 00:06:05.961411 kernel: audit: type=1131 audit(1768349165.817:105): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:05.961417 kernel: ACPI: bus type drm_connector registered Jan 14 00:06:05.961423 systemd[1]: Stopped systemd-fsck-usr.service. Jan 14 00:06:05.961430 kernel: fuse: init (API version 7.41) Jan 14 00:06:05.961437 kernel: audit: type=1131 audit(1768349165.847:106): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:05.961444 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 00:06:05.961451 kernel: audit: type=1334 audit(1768349165.872:107): prog-id=14 op=UNLOAD Jan 14 00:06:05.961457 kernel: audit: type=1334 audit(1768349165.872:108): prog-id=13 op=UNLOAD Jan 14 00:06:05.961463 kernel: audit: type=1334 audit(1768349165.876:109): prog-id=15 op=LOAD Jan 14 00:06:05.961469 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 00:06:05.961477 kernel: audit: type=1334 audit(1768349165.881:110): prog-id=16 op=LOAD Jan 14 00:06:05.961483 kernel: audit: type=1334 audit(1768349165.881:111): prog-id=17 op=LOAD Jan 14 00:06:05.961489 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 00:06:05.961496 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jan 14 00:06:05.961516 systemd-journald[1523]: Collecting audit messages is enabled. Jan 14 00:06:05.961532 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 14 00:06:05.961539 kernel: audit: type=1305 audit(1768349165.951:112): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 14 00:06:05.961546 systemd-journald[1523]: Journal started Jan 14 00:06:05.961562 systemd-journald[1523]: Runtime Journal (/run/log/journal/220ad8394d304ac0a12c711c9afb6a78) is 8M, max 78.3M, 70.3M free. Jan 14 00:06:05.444000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 14 00:06:05.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:05.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:05.872000 audit: BPF prog-id=14 op=UNLOAD Jan 14 00:06:05.872000 audit: BPF prog-id=13 op=UNLOAD Jan 14 00:06:05.876000 audit: BPF prog-id=15 op=LOAD Jan 14 00:06:05.881000 audit: BPF prog-id=16 op=LOAD Jan 14 00:06:05.881000 audit: BPF prog-id=17 op=LOAD Jan 14 00:06:05.951000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 14 00:06:05.113616 systemd[1]: Queued start job for default target multi-user.target. Jan 14 00:06:05.120467 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 14 00:06:05.120897 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 14 00:06:05.121207 systemd[1]: systemd-journald.service: Consumed 2.341s CPU time. Jan 14 00:06:05.951000 audit[1523]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=fffff48e8640 a2=4000 a3=0 items=0 ppid=1 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:05.986743 kernel: audit: type=1300 audit(1768349165.951:112): arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=fffff48e8640 a2=4000 a3=0 items=0 ppid=1 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:05.951000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 14 00:06:05.999524 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 14 00:06:06.019183 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 00:06:06.031265 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 00:06:06.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:06.034318 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jan 14 00:06:06.038970 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 14 00:06:06.045701 systemd[1]: Mounted media.mount - External Media Directory. Jan 14 00:06:06.049938 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 14 00:06:06.055635 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 14 00:06:06.060957 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 14 00:06:06.065334 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 14 00:06:06.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:06.070929 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 00:06:06.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:06.076626 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 14 00:06:06.076754 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 14 00:06:06.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:06.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:06.082111 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 00:06:06.082235 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 00:06:06.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:06.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:06.087093 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 00:06:06.087217 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 00:06:06.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:06.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:06.091835 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 00:06:06.091959 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 00:06:06.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:06:06.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:06.097133 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 14 00:06:06.097262 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 14 00:06:06.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:06.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:06.102459 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 00:06:06.102582 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 00:06:06.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:06.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:06.107717 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 00:06:06.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:06.113209 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 00:06:06.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:06.119805 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 14 00:06:06.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:06.125307 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 14 00:06:06.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:06.132301 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 00:06:06.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:06.146525 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 00:06:06.152020 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. 
Jan 14 00:06:06.158353 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 14 00:06:06.169481 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 14 00:06:06.174153 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 14 00:06:06.174178 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 00:06:06.179800 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 14 00:06:06.185137 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 00:06:06.185221 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 00:06:06.189819 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 14 00:06:06.195688 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 14 00:06:06.200313 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 00:06:06.201050 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 14 00:06:06.205482 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 00:06:06.206418 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 00:06:06.214936 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 14 00:06:06.222162 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 00:06:06.222558 systemd-journald[1523]: Time spent on flushing to /var/log/journal/220ad8394d304ac0a12c711c9afb6a78 is 36.175ms for 1063 entries. Jan 14 00:06:06.222558 systemd-journald[1523]: System Journal (/var/log/journal/220ad8394d304ac0a12c711c9afb6a78) is 8M, max 2.2G, 2.2G free. Jan 14 00:06:06.382562 systemd-journald[1523]: Received client request to flush runtime journal. Jan 14 00:06:06.382625 kernel: loop1: detected capacity change from 0 to 200800 Jan 14 00:06:06.382655 kernel: loop2: detected capacity change from 0 to 100192 Jan 14 00:06:06.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:06.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:06.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:06.234028 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 14 00:06:06.240932 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 14 00:06:06.246914 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
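[Editor's note] The journald statistics logged above (36.175 ms to flush 1063 entries to /var/log/journal) work out to roughly 34 µs per entry. The snippet below only re-derives that figure from the quoted line, so the numbers are the ones in this log rather than anything generic.

    import re

    line = ("systemd-journald[1523]: Time spent on flushing to "
            "/var/log/journal/220ad8394d304ac0a12c711c9afb6a78 "
            "is 36.175ms for 1063 entries.")

    ms, entries = re.search(r"is ([\d.]+)ms for (\d+) entries", line).groups()
    per_entry_us = float(ms) / int(entries) * 1000
    print(f"{per_entry_us:.1f} us per flushed journal entry")  # ~34.0 us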
Jan 14 00:06:06.253756 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 14 00:06:06.266164 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 14 00:06:06.276935 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 00:06:06.296668 systemd-tmpfiles[1565]: ACLs are not supported, ignoring. Jan 14 00:06:06.296676 systemd-tmpfiles[1565]: ACLs are not supported, ignoring. Jan 14 00:06:06.301336 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 00:06:06.314130 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 14 00:06:06.384105 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 14 00:06:06.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:08.234135 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 14 00:06:08.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:08.239000 audit: BPF prog-id=18 op=LOAD Jan 14 00:06:08.239000 audit: BPF prog-id=19 op=LOAD Jan 14 00:06:08.239000 audit: BPF prog-id=20 op=LOAD Jan 14 00:06:08.242265 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 14 00:06:08.249000 audit: BPF prog-id=21 op=LOAD Jan 14 00:06:08.253195 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 00:06:08.263135 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 00:06:08.274000 audit: BPF prog-id=22 op=LOAD Jan 14 00:06:08.275000 audit: BPF prog-id=23 op=LOAD Jan 14 00:06:08.275000 audit: BPF prog-id=24 op=LOAD Jan 14 00:06:08.276636 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 14 00:06:08.284000 audit: BPF prog-id=25 op=LOAD Jan 14 00:06:08.287205 systemd-tmpfiles[1584]: ACLs are not supported, ignoring. Jan 14 00:06:08.287215 systemd-tmpfiles[1584]: ACLs are not supported, ignoring. Jan 14 00:06:08.286000 audit: BPF prog-id=26 op=LOAD Jan 14 00:06:08.286000 audit: BPF prog-id=27 op=LOAD Jan 14 00:06:08.289573 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 14 00:06:08.300152 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 00:06:08.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:08.325100 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 14 00:06:08.326705 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 14 00:06:08.331257 systemd-nsresourced[1586]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 14 00:06:08.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:06:08.335541 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 14 00:06:08.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:08.377076 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 14 00:06:08.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:08.490092 systemd-oomd[1582]: No swap; memory pressure usage will be degraded Jan 14 00:06:08.490665 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 14 00:06:08.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:08.497971 systemd-resolved[1583]: Positive Trust Anchors: Jan 14 00:06:08.500030 systemd-resolved[1583]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 00:06:08.500039 systemd-resolved[1583]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 00:06:08.500066 systemd-resolved[1583]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 00:06:09.332468 systemd-resolved[1583]: Using system hostname 'ci-4547.0.0-n-16ff4e9fd7'. Jan 14 00:06:09.333634 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 00:06:09.340955 kernel: loop3: detected capacity change from 0 to 45344 Jan 14 00:06:09.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:09.341387 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 00:06:09.394048 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 14 00:06:09.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:09.398000 audit: BPF prog-id=8 op=UNLOAD Jan 14 00:06:09.398000 audit: BPF prog-id=7 op=UNLOAD Jan 14 00:06:09.399000 audit: BPF prog-id=28 op=LOAD Jan 14 00:06:09.399000 audit: BPF prog-id=29 op=LOAD Jan 14 00:06:09.400956 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 00:06:09.426813 systemd-udevd[1606]: Using default interface naming scheme 'v257'. Jan 14 00:06:09.985342 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 14 00:06:09.991057 kernel: loop4: detected capacity change from 0 to 27544 Jan 14 00:06:09.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:09.993000 audit: BPF prog-id=30 op=LOAD Jan 14 00:06:09.995104 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 00:06:10.056065 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 14 00:06:10.241070 kernel: hv_vmbus: registering driver hv_balloon Jan 14 00:06:10.242113 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 14 00:06:10.248638 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 14 00:06:10.288022 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#15 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 14 00:06:10.312044 kernel: mousedev: PS/2 mouse device common for all mice Jan 14 00:06:10.359788 kernel: hv_vmbus: registering driver hyperv_fb Jan 14 00:06:10.359879 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 14 00:06:10.365526 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 14 00:06:10.369452 kernel: Console: switching to colour dummy device 80x25 Jan 14 00:06:10.376028 kernel: Console: switching to colour frame buffer device 128x48 Jan 14 00:06:10.730246 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 00:06:10.741198 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 00:06:10.741364 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 00:06:10.764562 kernel: kauditd_printk_skb: 49 callbacks suppressed Jan 14 00:06:10.764656 kernel: audit: type=1130 audit(1768349170.745:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:10.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:10.751167 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 00:06:10.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:10.782197 kernel: audit: type=1131 audit(1768349170.745:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:10.786567 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 00:06:10.786782 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 00:06:10.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:06:10.793475 systemd-networkd[1621]: lo: Link UP Jan 14 00:06:10.793486 systemd-networkd[1621]: lo: Gained carrier Jan 14 00:06:10.795043 systemd-networkd[1621]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 00:06:10.795046 systemd-networkd[1621]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 00:06:10.795661 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 00:06:10.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:10.824007 kernel: audit: type=1130 audit(1768349170.792:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:10.824060 kernel: audit: type=1131 audit(1768349170.794:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:10.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:10.824667 systemd[1]: Reached target network.target - Network. Jan 14 00:06:10.836162 kernel: audit: type=1130 audit(1768349170.823:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:10.845122 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 14 00:06:10.853179 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 14 00:06:10.864512 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 00:06:10.906294 kernel: mlx5_core 643d:00:02.0 enP25661s1: Link up Jan 14 00:06:10.934245 kernel: hv_netvsc 7ced8dd0-5241-7ced-8dd0-52417ced8dd0 eth0: Data path switched to VF: enP25661s1 Jan 14 00:06:10.933837 systemd-networkd[1621]: enP25661s1: Link UP Jan 14 00:06:10.933974 systemd-networkd[1621]: eth0: Link UP Jan 14 00:06:10.933978 systemd-networkd[1621]: eth0: Gained carrier Jan 14 00:06:10.934003 systemd-networkd[1621]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 00:06:10.937480 systemd-networkd[1621]: enP25661s1: Gained carrier Jan 14 00:06:10.946055 systemd-networkd[1621]: eth0: DHCPv4 address 10.200.20.18/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 14 00:06:11.028715 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 14 00:06:11.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:06:11.054005 kernel: audit: type=1130 audit(1768349171.035:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:11.058916 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 14 00:06:11.065014 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 14 00:06:11.096231 kernel: MACsec IEEE 802.1AE Jan 14 00:06:11.163256 kernel: loop5: detected capacity change from 0 to 200800 Jan 14 00:06:11.182050 kernel: loop6: detected capacity change from 0 to 100192 Jan 14 00:06:11.284189 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 14 00:06:11.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:11.304017 kernel: audit: type=1130 audit(1768349171.288:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:11.533083 kernel: loop7: detected capacity change from 0 to 45344 Jan 14 00:06:11.631043 kernel: loop1: detected capacity change from 0 to 27544 Jan 14 00:06:11.787258 (sd-merge)[1736]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Jan 14 00:06:11.789708 (sd-merge)[1736]: Merged extensions into '/usr'. Jan 14 00:06:11.792646 systemd[1]: Reload requested from client PID 1563 ('systemd-sysext') (unit systemd-sysext.service)... Jan 14 00:06:11.792871 systemd[1]: Reloading... Jan 14 00:06:11.860115 zram_generator::config[1781]: No configuration found. Jan 14 00:06:12.226622 systemd[1]: Reloading finished in 433 ms. Jan 14 00:06:12.256849 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 14 00:06:12.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:12.275021 kernel: audit: type=1130 audit(1768349172.261:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:12.274619 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 00:06:12.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:12.293021 kernel: audit: type=1130 audit(1768349172.278:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:12.299190 systemd[1]: Starting ensure-sysext.service... Jan 14 00:06:12.304136 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
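[Editor's note] For CI triage it is often enough to pull the interface, address, gateway and DHCP server out of networkd lines like the "DHCPv4 address 10.200.20.18/24, gateway 10.200.20.1 acquired from 168.63.129.16" entry above. A small sketch, again keyed to the exact phrasing this log uses; the boot.log path in the usage comment is illustrative.

    import re

    # Keyed to lines such as:
    #   systemd-networkd[1621]: eth0: DHCPv4 address 10.200.20.18/24, gateway 10.200.20.1 acquired from 168.63.129.16
    LEASE_RE = re.compile(
        r"systemd-networkd\[\d+\]: (?P<iface>\S+): DHCPv4 address (?P<addr>\S+), "
        r"gateway (?P<gw>\S+) acquired from (?P<server>\S+)"
    )

    def leases(lines):
        for line in lines:
            for m in LEASE_RE.finditer(line):
                yield m.groupdict()

    # Usage sketch:
    # for lease in leases(open("boot.log")):
    #     print(lease)  # {'iface': 'eth0', 'addr': '10.200.20.18/24', ...}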
Jan 14 00:06:12.311000 audit: BPF prog-id=31 op=LOAD Jan 14 00:06:12.311000 audit: BPF prog-id=21 op=UNLOAD Jan 14 00:06:12.318012 kernel: audit: type=1334 audit(1768349172.311:170): prog-id=31 op=LOAD Jan 14 00:06:12.317000 audit: BPF prog-id=32 op=LOAD Jan 14 00:06:12.317000 audit: BPF prog-id=25 op=UNLOAD Jan 14 00:06:12.317000 audit: BPF prog-id=33 op=LOAD Jan 14 00:06:12.317000 audit: BPF prog-id=34 op=LOAD Jan 14 00:06:12.317000 audit: BPF prog-id=26 op=UNLOAD Jan 14 00:06:12.317000 audit: BPF prog-id=27 op=UNLOAD Jan 14 00:06:12.317000 audit: BPF prog-id=35 op=LOAD Jan 14 00:06:12.317000 audit: BPF prog-id=36 op=LOAD Jan 14 00:06:12.317000 audit: BPF prog-id=28 op=UNLOAD Jan 14 00:06:12.317000 audit: BPF prog-id=29 op=UNLOAD Jan 14 00:06:12.318000 audit: BPF prog-id=37 op=LOAD Jan 14 00:06:12.318000 audit: BPF prog-id=15 op=UNLOAD Jan 14 00:06:12.318000 audit: BPF prog-id=38 op=LOAD Jan 14 00:06:12.318000 audit: BPF prog-id=39 op=LOAD Jan 14 00:06:12.318000 audit: BPF prog-id=16 op=UNLOAD Jan 14 00:06:12.318000 audit: BPF prog-id=17 op=UNLOAD Jan 14 00:06:12.318000 audit: BPF prog-id=40 op=LOAD Jan 14 00:06:12.318000 audit: BPF prog-id=30 op=UNLOAD Jan 14 00:06:12.319000 audit: BPF prog-id=41 op=LOAD Jan 14 00:06:12.319000 audit: BPF prog-id=18 op=UNLOAD Jan 14 00:06:12.319000 audit: BPF prog-id=42 op=LOAD Jan 14 00:06:12.319000 audit: BPF prog-id=43 op=LOAD Jan 14 00:06:12.319000 audit: BPF prog-id=19 op=UNLOAD Jan 14 00:06:12.319000 audit: BPF prog-id=20 op=UNLOAD Jan 14 00:06:12.320000 audit: BPF prog-id=44 op=LOAD Jan 14 00:06:12.320000 audit: BPF prog-id=22 op=UNLOAD Jan 14 00:06:12.320000 audit: BPF prog-id=45 op=LOAD Jan 14 00:06:12.320000 audit: BPF prog-id=46 op=LOAD Jan 14 00:06:12.320000 audit: BPF prog-id=23 op=UNLOAD Jan 14 00:06:12.320000 audit: BPF prog-id=24 op=UNLOAD Jan 14 00:06:12.326175 systemd[1]: Reload requested from client PID 1829 ('systemctl') (unit ensure-sysext.service)... Jan 14 00:06:12.326191 systemd[1]: Reloading... Jan 14 00:06:12.326603 systemd-tmpfiles[1830]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 14 00:06:12.326623 systemd-tmpfiles[1830]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 14 00:06:12.326808 systemd-tmpfiles[1830]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 14 00:06:12.327765 systemd-tmpfiles[1830]: ACLs are not supported, ignoring. Jan 14 00:06:12.327889 systemd-tmpfiles[1830]: ACLs are not supported, ignoring. Jan 14 00:06:12.333312 systemd-tmpfiles[1830]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 00:06:12.333418 systemd-tmpfiles[1830]: Skipping /boot Jan 14 00:06:12.340613 systemd-tmpfiles[1830]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 00:06:12.340709 systemd-tmpfiles[1830]: Skipping /boot Jan 14 00:06:12.385029 zram_generator::config[1864]: No configuration found. Jan 14 00:06:12.539648 systemd[1]: Reloading finished in 213 ms. 
Jan 14 00:06:12.551000 audit: BPF prog-id=47 op=LOAD Jan 14 00:06:12.551000 audit: BPF prog-id=31 op=UNLOAD Jan 14 00:06:12.551000 audit: BPF prog-id=48 op=LOAD Jan 14 00:06:12.551000 audit: BPF prog-id=49 op=LOAD Jan 14 00:06:12.551000 audit: BPF prog-id=35 op=UNLOAD Jan 14 00:06:12.551000 audit: BPF prog-id=36 op=UNLOAD Jan 14 00:06:12.552000 audit: BPF prog-id=50 op=LOAD Jan 14 00:06:12.552000 audit: BPF prog-id=37 op=UNLOAD Jan 14 00:06:12.552000 audit: BPF prog-id=51 op=LOAD Jan 14 00:06:12.552000 audit: BPF prog-id=52 op=LOAD Jan 14 00:06:12.552000 audit: BPF prog-id=38 op=UNLOAD Jan 14 00:06:12.552000 audit: BPF prog-id=39 op=UNLOAD Jan 14 00:06:12.553000 audit: BPF prog-id=53 op=LOAD Jan 14 00:06:12.553000 audit: BPF prog-id=40 op=UNLOAD Jan 14 00:06:12.553000 audit: BPF prog-id=54 op=LOAD Jan 14 00:06:12.553000 audit: BPF prog-id=32 op=UNLOAD Jan 14 00:06:12.553000 audit: BPF prog-id=55 op=LOAD Jan 14 00:06:12.553000 audit: BPF prog-id=56 op=LOAD Jan 14 00:06:12.553000 audit: BPF prog-id=33 op=UNLOAD Jan 14 00:06:12.553000 audit: BPF prog-id=34 op=UNLOAD Jan 14 00:06:12.554000 audit: BPF prog-id=57 op=LOAD Jan 14 00:06:12.554000 audit: BPF prog-id=44 op=UNLOAD Jan 14 00:06:12.554000 audit: BPF prog-id=58 op=LOAD Jan 14 00:06:12.554000 audit: BPF prog-id=59 op=LOAD Jan 14 00:06:12.554000 audit: BPF prog-id=45 op=UNLOAD Jan 14 00:06:12.554000 audit: BPF prog-id=46 op=UNLOAD Jan 14 00:06:12.554000 audit: BPF prog-id=60 op=LOAD Jan 14 00:06:12.554000 audit: BPF prog-id=41 op=UNLOAD Jan 14 00:06:12.554000 audit: BPF prog-id=61 op=LOAD Jan 14 00:06:12.554000 audit: BPF prog-id=62 op=LOAD Jan 14 00:06:12.554000 audit: BPF prog-id=42 op=UNLOAD Jan 14 00:06:12.554000 audit: BPF prog-id=43 op=UNLOAD Jan 14 00:06:12.567067 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 00:06:12.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:12.579155 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 00:06:12.589816 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 14 00:06:12.597374 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 14 00:06:12.607137 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 14 00:06:12.615712 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 14 00:06:12.624889 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 00:06:12.626907 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 00:06:12.633382 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 00:06:12.632000 audit[1931]: SYSTEM_BOOT pid=1931 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 14 00:06:12.639869 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 00:06:12.644534 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 14 00:06:12.645542 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 00:06:12.646466 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 00:06:12.649758 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 00:06:12.649899 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 00:06:12.650003 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 00:06:12.650069 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 00:06:12.652094 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 14 00:06:12.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:12.661689 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 14 00:06:12.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:12.669755 systemd[1]: Finished ensure-sysext.service. Jan 14 00:06:12.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:12.673981 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 00:06:12.675067 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 00:06:12.679440 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 00:06:12.679518 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 00:06:12.679545 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 00:06:12.679585 systemd[1]: Reached target time-set.target - System Time Set. Jan 14 00:06:12.818202 systemd-networkd[1621]: eth0: Gained IPv6LL Jan 14 00:06:12.820854 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 14 00:06:12.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:06:12.827100 systemd[1]: Reached target network-online.target - Network is Online. Jan 14 00:06:12.832019 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 00:06:12.832224 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 00:06:12.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:12.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:12.838151 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 00:06:12.838301 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 00:06:12.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:12.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:12.843958 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 00:06:12.844103 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 00:06:12.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:12.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:12.849065 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 00:06:12.849227 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 00:06:12.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:12.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:12.856058 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 00:06:12.856135 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 14 00:06:12.989000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 14 00:06:12.989000 audit[1957]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe4f1d370 a2=420 a3=0 items=0 ppid=1921 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:12.989000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 00:06:12.990892 augenrules[1957]: No rules Jan 14 00:06:12.991864 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 00:06:12.992295 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 00:06:14.182287 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 14 00:06:14.187777 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 14 00:06:20.054782 ldconfig[1923]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 14 00:06:20.241276 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 14 00:06:20.247603 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 14 00:06:20.383921 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 14 00:06:20.389047 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 00:06:20.393822 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 14 00:06:20.398899 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 14 00:06:20.404434 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 14 00:06:20.409089 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 14 00:06:20.414157 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 14 00:06:20.419151 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 14 00:06:20.424189 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 14 00:06:20.429704 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 14 00:06:20.429734 systemd[1]: Reached target paths.target - Path Units. Jan 14 00:06:20.433520 systemd[1]: Reached target timers.target - Timer Units. Jan 14 00:06:20.444712 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 14 00:06:20.450927 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 14 00:06:20.456488 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 14 00:06:20.461926 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 14 00:06:20.466863 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 14 00:06:20.472855 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 14 00:06:20.477188 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
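[Editor's note] Gaps like the several-second pause above between clean-ca-certificates finishing (00:06:14) and ldconfig.service finishing (00:06:20) stand out more readily if you compute deltas between consecutive timestamps. A minimal sketch assuming the "Jan 14 00:06:20.241276" prefix format used throughout this log; the prefix carries no year, so strptime's default is fine for computing differences.

    import re
    import sys
    from datetime import datetime

    STAMP_RE = re.compile(r"([A-Z][a-z]{2} \d{1,2} \d{2}:\d{2}:\d{2}\.\d{6})")

    def stamps(path):
        for line in open(path, encoding="utf-8", errors="replace"):
            for m in STAMP_RE.finditer(line):
                # No year in the prefix; strptime defaults it, which is fine for deltas.
                yield datetime.strptime(m.group(1), "%b %d %H:%M:%S.%f"), line[:120]

    def largest_gaps(path, top=10):
        seq = list(stamps(path))
        gaps = [(b[0] - a[0], b[1]) for a, b in zip(seq, seq[1:])]
        return sorted(gaps, reverse=True)[:top]

    if __name__ == "__main__":
        for delta, context in largest_gaps(sys.argv[1] if len(sys.argv) > 1 else "boot.log"):
            print(f"{delta.total_seconds():7.3f}s  {context}")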
Jan 14 00:06:20.482278 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 14 00:06:20.486752 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 00:06:20.490487 systemd[1]: Reached target basic.target - Basic System. Jan 14 00:06:20.494302 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 14 00:06:20.494329 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 14 00:06:20.496493 systemd[1]: Starting chronyd.service - NTP client/server... Jan 14 00:06:20.510107 systemd[1]: Starting containerd.service - containerd container runtime... Jan 14 00:06:20.515031 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 14 00:06:20.525136 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 14 00:06:20.532157 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 14 00:06:20.538386 chronyd[1969]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 14 00:06:20.540044 chronyd[1969]: Timezone right/UTC failed leap second check, ignoring Jan 14 00:06:20.540192 chronyd[1969]: Loaded seccomp filter (level 2) Jan 14 00:06:20.540772 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 14 00:06:20.549942 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 14 00:06:20.554188 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 14 00:06:20.555666 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 14 00:06:20.556626 jq[1977]: false Jan 14 00:06:20.559939 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 14 00:06:20.562096 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:06:20.568471 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 14 00:06:20.569620 KVP[1979]: KVP starting; pid is:1979 Jan 14 00:06:20.574358 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 14 00:06:20.580330 KVP[1979]: KVP LIC Version: 3.1 Jan 14 00:06:20.580893 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 14 00:06:20.581122 kernel: hv_utils: KVP IC version 4.0 Jan 14 00:06:20.581984 extend-filesystems[1978]: Found /dev/sda6 Jan 14 00:06:20.595019 extend-filesystems[1978]: Found /dev/sda9 Jan 14 00:06:20.595019 extend-filesystems[1978]: Checking size of /dev/sda9 Jan 14 00:06:20.593386 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 14 00:06:20.618284 extend-filesystems[1978]: Resized partition /dev/sda9 Jan 14 00:06:20.604519 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 14 00:06:20.624114 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 14 00:06:20.633319 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 14 00:06:20.633703 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 14 00:06:20.635276 systemd[1]: Starting update-engine.service - Update Engine... 
Jan 14 00:06:20.639486 extend-filesystems[2003]: resize2fs 1.47.3 (8-Jul-2025) Jan 14 00:06:20.643886 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 14 00:06:20.652014 jq[2010]: true Jan 14 00:06:20.655080 systemd[1]: Started chronyd.service - NTP client/server. Jan 14 00:06:20.662340 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 14 00:06:20.668485 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 14 00:06:20.668852 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 14 00:06:20.676874 systemd[1]: motdgen.service: Deactivated successfully. Jan 14 00:06:20.677086 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 14 00:06:20.684447 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 14 00:06:20.690177 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 14 00:06:20.714094 jq[2042]: true Jan 14 00:06:21.093125 update_engine[2007]: I20260114 00:06:21.092777 2007 main.cc:92] Flatcar Update Engine starting Jan 14 00:06:21.228380 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 14 00:06:21.246952 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:06:21.255460 (kubelet)[2091]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:06:21.351074 kernel: EXT4-fs (sda9): resizing filesystem from 6359552 to 6376955 blocks Jan 14 00:06:21.351161 kernel: EXT4-fs (sda9): resized filesystem to 6376955 Jan 14 00:06:21.384322 systemd-logind[1997]: New seat seat0. Jan 14 00:06:21.594515 dbus-daemon[1972]: [system] SELinux support is enabled Jan 14 00:06:22.183660 coreos-metadata[1971]: Jan 14 00:06:21.641 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 14 00:06:22.183660 coreos-metadata[1971]: Jan 14 00:06:21.724 INFO Fetch successful Jan 14 00:06:22.183660 coreos-metadata[1971]: Jan 14 00:06:21.724 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 14 00:06:22.183660 coreos-metadata[1971]: Jan 14 00:06:21.724 INFO Fetch successful Jan 14 00:06:22.183660 coreos-metadata[1971]: Jan 14 00:06:21.724 INFO Fetching http://168.63.129.16/machine/e04c37b3-7873-45de-84b7-560e281c6cd9/35d39c17%2D0f41%2D4c1e%2D8309%2D76a23e986734.%5Fci%2D4547.0.0%2Dn%2D16ff4e9fd7?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 14 00:06:22.183660 coreos-metadata[1971]: Jan 14 00:06:21.724 INFO Fetch successful Jan 14 00:06:22.183660 coreos-metadata[1971]: Jan 14 00:06:21.724 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 14 00:06:22.183660 coreos-metadata[1971]: Jan 14 00:06:21.724 INFO Fetch successful Jan 14 00:06:21.578297 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:06:22.183938 update_engine[2007]: I20260114 00:06:21.601013 2007 update_check_scheduler.cc:74] Next update check in 10m17s Jan 14 00:06:22.184115 kubelet[2091]: E0114 00:06:21.576502 2091 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" 
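[Editor's note] The kubelet failure above is the unit exiting because /var/lib/kubelet/config.yaml does not exist yet; on kubeadm-style provisioning that file is typically written by kubeadm init/join later, so at this point in boot the error is not by itself a sign of a broken image. A trivial check for the same precondition, with the path taken verbatim from the error message:

    from pathlib import Path

    # Path copied from the kubelet error above; kubeadm normally writes it during init/join.
    CONFIG = Path("/var/lib/kubelet/config.yaml")

    if CONFIG.is_file():
        print(f"{CONFIG} present ({CONFIG.stat().st_size} bytes) - kubelet can load it")
    else:
        print(f"{CONFIG} missing - kubelet will exit exactly as in the log above")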
Jan 14 00:06:21.578405 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:06:21.578713 systemd[1]: kubelet.service: Consumed 491ms CPU time, 247.2M memory peak. Jan 14 00:06:21.594966 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 14 00:06:21.604507 systemd[1]: Started update-engine.service - Update Engine. Jan 14 00:06:21.609310 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 14 00:06:21.609333 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 14 00:06:21.614875 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 14 00:06:21.614886 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 14 00:06:21.621015 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 14 00:06:21.745037 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 14 00:06:21.750017 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 14 00:06:22.182936 systemd-logind[1997]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 14 00:06:22.183440 systemd[1]: Started systemd-logind.service - User Login Management. Jan 14 00:06:22.305937 tar[2036]: linux-arm64/LICENSE Jan 14 00:06:22.877746 containerd[2043]: time="2026-01-14T00:06:22Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 14 00:06:22.879754 tar[2036]: linux-arm64/helm Jan 14 00:06:22.880247 containerd[2043]: time="2026-01-14T00:06:22.880152428Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 14 00:06:22.884527 extend-filesystems[2003]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 14 00:06:22.884527 extend-filesystems[2003]: old_desc_blocks = 4, new_desc_blocks = 4 Jan 14 00:06:22.884527 extend-filesystems[2003]: The filesystem on /dev/sda9 is now 6376955 (4k) blocks long. 
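[Editor's note] The resize reported above is small: with the 4 KiB block size in the EXT4-fs lines, 6359552 -> 6376955 blocks is roughly 24.26 GiB -> 24.33 GiB, i.e. about 68 MiB gained for the root filesystem. The figures below are taken directly from the kernel and resize2fs messages above.

    OLD_BLOCKS, NEW_BLOCKS, BLOCK_SIZE = 6_359_552, 6_376_955, 4096  # from the EXT4-fs lines above

    def gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    print(f"before: {gib(OLD_BLOCKS):.2f} GiB")
    print(f"after:  {gib(NEW_BLOCKS):.2f} GiB")
    print(f"grown:  {(NEW_BLOCKS - OLD_BLOCKS) * BLOCK_SIZE / 2**20:.0f} MiB")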
Jan 14 00:06:22.915378 extend-filesystems[1978]: Resized filesystem in /dev/sda9 Jan 14 00:06:22.926451 containerd[2043]: time="2026-01-14T00:06:22.909182972Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.712µs" Jan 14 00:06:22.926451 containerd[2043]: time="2026-01-14T00:06:22.909217636Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 14 00:06:22.926451 containerd[2043]: time="2026-01-14T00:06:22.909258308Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 14 00:06:22.926451 containerd[2043]: time="2026-01-14T00:06:22.909269964Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 14 00:06:22.926451 containerd[2043]: time="2026-01-14T00:06:22.923704988Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 14 00:06:22.926451 containerd[2043]: time="2026-01-14T00:06:22.923742340Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 00:06:22.926451 containerd[2043]: time="2026-01-14T00:06:22.923794012Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 00:06:22.926451 containerd[2043]: time="2026-01-14T00:06:22.923801772Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 00:06:22.926451 containerd[2043]: time="2026-01-14T00:06:22.924559836Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 00:06:22.926451 containerd[2043]: time="2026-01-14T00:06:22.924579812Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 00:06:22.926451 containerd[2043]: time="2026-01-14T00:06:22.924590324Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 00:06:22.926451 containerd[2043]: time="2026-01-14T00:06:22.924595940Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 00:06:22.891386 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Jan 14 00:06:22.926785 bash[2073]: Updated "/home/core/.ssh/authorized_keys" Jan 14 00:06:22.932551 containerd[2043]: time="2026-01-14T00:06:22.924745228Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 00:06:22.932551 containerd[2043]: time="2026-01-14T00:06:22.924767444Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 14 00:06:22.932551 containerd[2043]: time="2026-01-14T00:06:22.924819476Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 14 00:06:22.932551 containerd[2043]: time="2026-01-14T00:06:22.924945412Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 00:06:22.932551 containerd[2043]: time="2026-01-14T00:06:22.924964212Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 00:06:22.932551 containerd[2043]: time="2026-01-14T00:06:22.924970044Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 14 00:06:22.932551 containerd[2043]: time="2026-01-14T00:06:22.925258660Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 14 00:06:22.932551 containerd[2043]: time="2026-01-14T00:06:22.925437692Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 14 00:06:22.932551 containerd[2043]: time="2026-01-14T00:06:22.925502060Z" level=info msg="metadata content store policy set" policy=shared Jan 14 00:06:22.892486 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 14 00:06:22.921525 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 14 00:06:22.935670 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jan 14 00:06:22.941930 locksmithd[2131]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 14 00:06:22.944791 containerd[2043]: time="2026-01-14T00:06:22.944739924Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 14 00:06:22.944978 containerd[2043]: time="2026-01-14T00:06:22.944861972Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 00:06:22.945973 containerd[2043]: time="2026-01-14T00:06:22.945255564Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 00:06:22.945973 containerd[2043]: time="2026-01-14T00:06:22.945285540Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 14 00:06:22.945973 containerd[2043]: time="2026-01-14T00:06:22.945306308Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 14 00:06:22.945973 containerd[2043]: time="2026-01-14T00:06:22.945314964Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 14 00:06:22.945973 containerd[2043]: time="2026-01-14T00:06:22.945324020Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 14 00:06:22.945973 containerd[2043]: time="2026-01-14T00:06:22.945330148Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 14 00:06:22.945973 containerd[2043]: time="2026-01-14T00:06:22.945337980Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 14 00:06:22.945973 containerd[2043]: time="2026-01-14T00:06:22.945346236Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 14 00:06:22.945973 containerd[2043]: time="2026-01-14T00:06:22.945352468Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 14 00:06:22.945973 containerd[2043]: time="2026-01-14T00:06:22.945360620Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 14 00:06:22.945973 containerd[2043]: time="2026-01-14T00:06:22.945367508Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 14 00:06:22.945973 containerd[2043]: time="2026-01-14T00:06:22.945379188Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 14 00:06:22.945973 containerd[2043]: time="2026-01-14T00:06:22.945486668Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 14 00:06:22.946190 containerd[2043]: time="2026-01-14T00:06:22.945500684Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 14 00:06:22.946190 containerd[2043]: time="2026-01-14T00:06:22.945509948Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 14 00:06:22.946190 containerd[2043]: time="2026-01-14T00:06:22.945516308Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 14 00:06:22.946190 containerd[2043]: time="2026-01-14T00:06:22.945522884Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 14 00:06:22.946190 containerd[2043]: time="2026-01-14T00:06:22.945529244Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 14 00:06:22.946190 containerd[2043]: time="2026-01-14T00:06:22.945537060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 14 00:06:22.946190 containerd[2043]: time="2026-01-14T00:06:22.945546380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 14 00:06:22.946190 containerd[2043]: time="2026-01-14T00:06:22.945553380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 14 00:06:22.946190 containerd[2043]: time="2026-01-14T00:06:22.945560076Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 14 00:06:22.946190 containerd[2043]: time="2026-01-14T00:06:22.945566260Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 14 00:06:22.946190 containerd[2043]: time="2026-01-14T00:06:22.945584484Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 14 00:06:22.946190 containerd[2043]: time="2026-01-14T00:06:22.945615708Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 14 00:06:22.946190 containerd[2043]: time="2026-01-14T00:06:22.945626260Z" level=info msg="Start snapshots syncer" Jan 14 00:06:22.946755 containerd[2043]: time="2026-01-14T00:06:22.946731972Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 14 00:06:22.947828 containerd[2043]: time="2026-01-14T00:06:22.947372828Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 14 00:06:22.947828 containerd[2043]: time="2026-01-14T00:06:22.947422500Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 14 00:06:22.948235 containerd[2043]: time="2026-01-14T00:06:22.948201428Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 14 00:06:22.948625 containerd[2043]: time="2026-01-14T00:06:22.948419308Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 14 00:06:22.948718 containerd[2043]: time="2026-01-14T00:06:22.948703028Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 14 00:06:22.948935 containerd[2043]: time="2026-01-14T00:06:22.948918228Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 14 00:06:22.949033 containerd[2043]: time="2026-01-14T00:06:22.948983060Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 14 00:06:22.949095 containerd[2043]: time="2026-01-14T00:06:22.949074132Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 14 00:06:22.949381 containerd[2043]: time="2026-01-14T00:06:22.949277404Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 14 00:06:22.949381 containerd[2043]: time="2026-01-14T00:06:22.949294868Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 14 00:06:22.949381 containerd[2043]: time="2026-01-14T00:06:22.949302716Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 14 
00:06:22.949381 containerd[2043]: time="2026-01-14T00:06:22.949310084Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 14 00:06:22.949639 containerd[2043]: time="2026-01-14T00:06:22.949582684Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 00:06:22.949639 containerd[2043]: time="2026-01-14T00:06:22.949606132Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 00:06:22.949639 containerd[2043]: time="2026-01-14T00:06:22.949612372Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 00:06:22.949639 containerd[2043]: time="2026-01-14T00:06:22.949618564Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 00:06:22.950056 containerd[2043]: time="2026-01-14T00:06:22.949623572Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 14 00:06:22.950056 containerd[2043]: time="2026-01-14T00:06:22.949906612Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 14 00:06:22.950056 containerd[2043]: time="2026-01-14T00:06:22.949916300Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 14 00:06:22.950056 containerd[2043]: time="2026-01-14T00:06:22.949929628Z" level=info msg="runtime interface created" Jan 14 00:06:22.950056 containerd[2043]: time="2026-01-14T00:06:22.949933124Z" level=info msg="created NRI interface" Jan 14 00:06:22.950056 containerd[2043]: time="2026-01-14T00:06:22.949939604Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 14 00:06:22.950056 containerd[2043]: time="2026-01-14T00:06:22.949949356Z" level=info msg="Connect containerd service" Jan 14 00:06:22.950056 containerd[2043]: time="2026-01-14T00:06:22.949968684Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 14 00:06:22.952095 containerd[2043]: time="2026-01-14T00:06:22.951747836Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 00:06:22.992453 sshd_keygen[2014]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 14 00:06:23.012070 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 14 00:06:23.019741 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 14 00:06:23.027263 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 14 00:06:23.044875 systemd[1]: issuegen.service: Deactivated successfully. Jan 14 00:06:23.045788 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 14 00:06:23.056277 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 14 00:06:23.118211 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 14 00:06:23.126365 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 14 00:06:23.137517 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Jan 14 00:06:23.147135 containerd[2043]: time="2026-01-14T00:06:23.147092524Z" level=info msg="Start subscribing containerd event" Jan 14 00:06:23.147261 containerd[2043]: time="2026-01-14T00:06:23.147248244Z" level=info msg="Start recovering state" Jan 14 00:06:23.147384 containerd[2043]: time="2026-01-14T00:06:23.147373028Z" level=info msg="Start event monitor" Jan 14 00:06:23.147510 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 14 00:06:23.147727 containerd[2043]: time="2026-01-14T00:06:23.147708748Z" level=info msg="Start cni network conf syncer for default" Jan 14 00:06:23.147786 containerd[2043]: time="2026-01-14T00:06:23.147776124Z" level=info msg="Start streaming server" Jan 14 00:06:23.147856 containerd[2043]: time="2026-01-14T00:06:23.147844084Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 14 00:06:23.147892 containerd[2043]: time="2026-01-14T00:06:23.147883548Z" level=info msg="runtime interface starting up..." Jan 14 00:06:23.147942 containerd[2043]: time="2026-01-14T00:06:23.147931148Z" level=info msg="starting plugins..." Jan 14 00:06:23.148004 containerd[2043]: time="2026-01-14T00:06:23.147984068Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 14 00:06:23.148370 containerd[2043]: time="2026-01-14T00:06:23.148351572Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 14 00:06:23.148478 containerd[2043]: time="2026-01-14T00:06:23.148464780Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 14 00:06:23.148555 containerd[2043]: time="2026-01-14T00:06:23.148545844Z" level=info msg="containerd successfully booted in 0.374875s" Jan 14 00:06:23.158429 systemd[1]: Reached target getty.target - Login Prompts. Jan 14 00:06:23.164577 systemd[1]: Started containerd.service - containerd container runtime. Jan 14 00:06:23.237351 tar[2036]: linux-arm64/README.md Jan 14 00:06:23.254026 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 14 00:06:23.261037 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 14 00:06:23.267398 systemd[1]: Startup finished in 1.774s (kernel) + 7.230s (initrd) + 19.076s (userspace) = 28.081s. Jan 14 00:06:23.702900 login[2185]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:06:23.703896 login[2184]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:06:23.712152 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 14 00:06:23.713117 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 14 00:06:23.713686 waagent[2182]: 2026-01-14T00:06:23.713562Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jan 14 00:06:23.718672 waagent[2182]: 2026-01-14T00:06:23.718624Z INFO Daemon Daemon OS: flatcar 4547.0.0 Jan 14 00:06:23.722236 systemd-logind[1997]: New session 1 of user core. Jan 14 00:06:23.726469 waagent[2182]: 2026-01-14T00:06:23.723252Z INFO Daemon Daemon Python: 3.11.13 Jan 14 00:06:23.727001 waagent[2182]: 2026-01-14T00:06:23.726950Z INFO Daemon Daemon Run daemon Jan 14 00:06:23.730457 waagent[2182]: 2026-01-14T00:06:23.730422Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4547.0.0' Jan 14 00:06:23.737421 waagent[2182]: 2026-01-14T00:06:23.737380Z INFO Daemon Daemon Using waagent for provisioning Jan 14 00:06:23.741559 systemd-logind[1997]: New session 2 of user core. 
Jan 14 00:06:23.742064 waagent[2182]: 2026-01-14T00:06:23.742027Z INFO Daemon Daemon Activate resource disk Jan 14 00:06:23.746730 waagent[2182]: 2026-01-14T00:06:23.746690Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 14 00:06:23.750859 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 14 00:06:23.755204 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 14 00:06:23.756564 waagent[2182]: 2026-01-14T00:06:23.756521Z INFO Daemon Daemon Found device: None Jan 14 00:06:23.760150 waagent[2182]: 2026-01-14T00:06:23.760111Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 14 00:06:23.766264 waagent[2182]: 2026-01-14T00:06:23.766220Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 14 00:06:23.775167 waagent[2182]: 2026-01-14T00:06:23.775118Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 14 00:06:23.779404 (systemd)[2199]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:06:23.780320 waagent[2182]: 2026-01-14T00:06:23.780282Z INFO Daemon Daemon Running default provisioning handler Jan 14 00:06:23.785414 systemd-logind[1997]: New session 3 of user core. Jan 14 00:06:23.790639 waagent[2182]: 2026-01-14T00:06:23.790600Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 14 00:06:23.800688 waagent[2182]: 2026-01-14T00:06:23.800650Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 14 00:06:23.807656 waagent[2182]: 2026-01-14T00:06:23.807618Z INFO Daemon Daemon cloud-init is enabled: False Jan 14 00:06:23.811692 waagent[2182]: 2026-01-14T00:06:23.811651Z INFO Daemon Daemon Copying ovf-env.xml Jan 14 00:06:23.850438 waagent[2182]: 2026-01-14T00:06:23.850384Z INFO Daemon Daemon Successfully mounted dvd Jan 14 00:06:23.865693 waagent[2182]: 2026-01-14T00:06:23.865655Z INFO Daemon Daemon Detect protocol endpoint Jan 14 00:06:23.866053 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 14 00:06:23.869519 waagent[2182]: 2026-01-14T00:06:23.869484Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 14 00:06:23.873730 waagent[2182]: 2026-01-14T00:06:23.873696Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 14 00:06:23.878392 waagent[2182]: 2026-01-14T00:06:23.878360Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 14 00:06:23.882695 waagent[2182]: 2026-01-14T00:06:23.882658Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 14 00:06:23.886412 waagent[2182]: 2026-01-14T00:06:23.886379Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 14 00:06:23.897750 waagent[2182]: 2026-01-14T00:06:23.897711Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 14 00:06:23.902962 waagent[2182]: 2026-01-14T00:06:23.902938Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 14 00:06:23.906938 waagent[2182]: 2026-01-14T00:06:23.906905Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 14 00:06:23.909204 systemd[2199]: Queued start job for default target default.target. Jan 14 00:06:23.914427 systemd[2199]: Created slice app.slice - User Application Slice. 
Jan 14 00:06:23.914454 systemd[2199]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 14 00:06:23.914464 systemd[2199]: Reached target paths.target - Paths. Jan 14 00:06:23.914850 systemd[2199]: Reached target timers.target - Timers. Jan 14 00:06:23.916372 systemd[2199]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 14 00:06:23.916931 systemd[2199]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 14 00:06:23.930209 systemd[2199]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 14 00:06:23.931628 systemd[2199]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 14 00:06:23.931789 systemd[2199]: Reached target sockets.target - Sockets. Jan 14 00:06:23.931895 systemd[2199]: Reached target basic.target - Basic System. Jan 14 00:06:23.932107 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 14 00:06:23.932123 systemd[2199]: Reached target default.target - Main User Target. Jan 14 00:06:23.932155 systemd[2199]: Startup finished in 140ms. Jan 14 00:06:23.937092 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 14 00:06:23.937587 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 14 00:06:23.992413 waagent[2182]: 2026-01-14T00:06:23.992345Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 14 00:06:23.997503 waagent[2182]: 2026-01-14T00:06:23.997458Z INFO Daemon Daemon Forcing an update of the goal state. Jan 14 00:06:24.005042 waagent[2182]: 2026-01-14T00:06:24.004957Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 14 00:06:24.025854 waagent[2182]: 2026-01-14T00:06:24.021626Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 14 00:06:24.026397 waagent[2182]: 2026-01-14T00:06:24.026356Z INFO Daemon Jan 14 00:06:24.028717 waagent[2182]: 2026-01-14T00:06:24.028680Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: a0933358-f198-4fb9-9473-e596721866b3 eTag: 12477286692984587971 source: Fabric] Jan 14 00:06:24.037326 waagent[2182]: 2026-01-14T00:06:24.037292Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 14 00:06:24.042253 waagent[2182]: 2026-01-14T00:06:24.042221Z INFO Daemon Jan 14 00:06:24.044707 waagent[2182]: 2026-01-14T00:06:24.044679Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 14 00:06:24.052599 waagent[2182]: 2026-01-14T00:06:24.052569Z INFO Daemon Daemon Downloading artifacts profile blob Jan 14 00:06:24.112027 waagent[2182]: 2026-01-14T00:06:24.111366Z INFO Daemon Downloaded certificate {'thumbprint': '6161878275D7ECE409471864F6B6A440DB0A0DD2', 'hasPrivateKey': True} Jan 14 00:06:24.118828 waagent[2182]: 2026-01-14T00:06:24.118791Z INFO Daemon Fetch goal state completed Jan 14 00:06:24.127927 waagent[2182]: 2026-01-14T00:06:24.127876Z INFO Daemon Daemon Starting provisioning Jan 14 00:06:24.131899 waagent[2182]: 2026-01-14T00:06:24.131860Z INFO Daemon Daemon Handle ovf-env.xml. 
Jan 14 00:06:24.135726 waagent[2182]: 2026-01-14T00:06:24.135697Z INFO Daemon Daemon Set hostname [ci-4547.0.0-n-16ff4e9fd7] Jan 14 00:06:24.141771 waagent[2182]: 2026-01-14T00:06:24.141704Z INFO Daemon Daemon Publish hostname [ci-4547.0.0-n-16ff4e9fd7] Jan 14 00:06:24.146521 waagent[2182]: 2026-01-14T00:06:24.146483Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 14 00:06:24.151652 waagent[2182]: 2026-01-14T00:06:24.151615Z INFO Daemon Daemon Primary interface is [eth0] Jan 14 00:06:24.162155 systemd-networkd[1621]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 00:06:24.162162 systemd-networkd[1621]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. Jan 14 00:06:24.162240 systemd-networkd[1621]: eth0: DHCP lease lost Jan 14 00:06:24.183704 waagent[2182]: 2026-01-14T00:06:24.183659Z INFO Daemon Daemon Create user account if not exists Jan 14 00:06:24.188125 waagent[2182]: 2026-01-14T00:06:24.188085Z INFO Daemon Daemon User core already exists, skip useradd Jan 14 00:06:24.192596 waagent[2182]: 2026-01-14T00:06:24.192566Z INFO Daemon Daemon Configure sudoer Jan 14 00:06:24.193032 systemd-networkd[1621]: eth0: DHCPv4 address 10.200.20.18/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 14 00:06:24.200311 waagent[2182]: 2026-01-14T00:06:24.200267Z INFO Daemon Daemon Configure sshd Jan 14 00:06:24.206630 waagent[2182]: 2026-01-14T00:06:24.206593Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 14 00:06:24.216087 waagent[2182]: 2026-01-14T00:06:24.216057Z INFO Daemon Daemon Deploy ssh public key. Jan 14 00:06:25.276880 waagent[2182]: 2026-01-14T00:06:25.276813Z INFO Daemon Daemon Provisioning complete Jan 14 00:06:25.288775 waagent[2182]: 2026-01-14T00:06:25.288738Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 14 00:06:25.293638 waagent[2182]: 2026-01-14T00:06:25.293606Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jan 14 00:06:25.301585 waagent[2182]: 2026-01-14T00:06:25.301556Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jan 14 00:06:25.402030 waagent[2250]: 2026-01-14T00:06:25.401787Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jan 14 00:06:25.402030 waagent[2250]: 2026-01-14T00:06:25.401924Z INFO ExtHandler ExtHandler OS: flatcar 4547.0.0 Jan 14 00:06:25.402030 waagent[2250]: 2026-01-14T00:06:25.401965Z INFO ExtHandler ExtHandler Python: 3.11.13 Jan 14 00:06:25.403034 waagent[2250]: 2026-01-14T00:06:25.402453Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jan 14 00:06:25.415867 waagent[2250]: 2026-01-14T00:06:25.415806Z INFO ExtHandler ExtHandler Distro: flatcar-4547.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jan 14 00:06:25.416180 waagent[2250]: 2026-01-14T00:06:25.416148Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 00:06:25.416304 waagent[2250]: 2026-01-14T00:06:25.416279Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 00:06:25.421747 waagent[2250]: 2026-01-14T00:06:25.421698Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 14 00:06:25.426276 waagent[2250]: 2026-01-14T00:06:25.426242Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 14 00:06:25.426714 waagent[2250]: 2026-01-14T00:06:25.426682Z INFO ExtHandler Jan 14 00:06:25.426869 waagent[2250]: 2026-01-14T00:06:25.426843Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 6d8c83a4-bf2d-4fdf-9672-ed91fe818fd3 eTag: 12477286692984587971 source: Fabric] Jan 14 00:06:25.427222 waagent[2250]: 2026-01-14T00:06:25.427189Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 14 00:06:25.427729 waagent[2250]: 2026-01-14T00:06:25.427697Z INFO ExtHandler Jan 14 00:06:25.427843 waagent[2250]: 2026-01-14T00:06:25.427819Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 14 00:06:25.430427 waagent[2250]: 2026-01-14T00:06:25.430399Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 14 00:06:25.481060 waagent[2250]: 2026-01-14T00:06:25.480975Z INFO ExtHandler Downloaded certificate {'thumbprint': '6161878275D7ECE409471864F6B6A440DB0A0DD2', 'hasPrivateKey': True} Jan 14 00:06:25.481617 waagent[2250]: 2026-01-14T00:06:25.481582Z INFO ExtHandler Fetch goal state completed Jan 14 00:06:25.492758 waagent[2250]: 2026-01-14T00:06:25.492724Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.5.4 30 Sep 2025 (Library: OpenSSL 3.5.4 30 Sep 2025) Jan 14 00:06:25.496214 waagent[2250]: 2026-01-14T00:06:25.496177Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2250 Jan 14 00:06:25.496397 waagent[2250]: 2026-01-14T00:06:25.496369Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 14 00:06:25.496716 waagent[2250]: 2026-01-14T00:06:25.496685Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jan 14 00:06:25.497880 waagent[2250]: 2026-01-14T00:06:25.497842Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4547.0.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 14 00:06:25.498343 waagent[2250]: 2026-01-14T00:06:25.498304Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4547.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jan 14 00:06:25.498542 waagent[2250]: 2026-01-14T00:06:25.498512Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jan 14 00:06:25.499104 waagent[2250]: 2026-01-14T00:06:25.499075Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 14 00:06:25.510222 waagent[2250]: 2026-01-14T00:06:25.510196Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 14 00:06:25.510433 waagent[2250]: 2026-01-14T00:06:25.510400Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 14 00:06:25.515439 waagent[2250]: 2026-01-14T00:06:25.515417Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 14 00:06:25.520213 systemd[1]: Reload requested from client PID 2265 ('systemctl') (unit waagent.service)... Jan 14 00:06:25.520229 systemd[1]: Reloading... Jan 14 00:06:25.597029 zram_generator::config[2316]: No configuration found. Jan 14 00:06:25.742308 systemd[1]: Reloading finished in 221 ms. Jan 14 00:06:25.769025 waagent[2250]: 2026-01-14T00:06:25.768645Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 14 00:06:25.769025 waagent[2250]: 2026-01-14T00:06:25.768783Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 14 00:06:25.842392 waagent[2250]: 2026-01-14T00:06:25.842332Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 14 00:06:25.842790 waagent[2250]: 2026-01-14T00:06:25.842756Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jan 14 00:06:25.843533 waagent[2250]: 2026-01-14T00:06:25.843479Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 14 00:06:25.843625 waagent[2250]: 2026-01-14T00:06:25.843590Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 00:06:25.843688 waagent[2250]: 2026-01-14T00:06:25.843667Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 00:06:25.843916 waagent[2250]: 2026-01-14T00:06:25.843827Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 14 00:06:25.844216 waagent[2250]: 2026-01-14T00:06:25.844176Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 14 00:06:25.844339 waagent[2250]: 2026-01-14T00:06:25.844307Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 14 00:06:25.844339 waagent[2250]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 14 00:06:25.844339 waagent[2250]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 14 00:06:25.844339 waagent[2250]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 14 00:06:25.844339 waagent[2250]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 14 00:06:25.844339 waagent[2250]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 14 00:06:25.844339 waagent[2250]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 14 00:06:25.844740 waagent[2250]: 2026-01-14T00:06:25.844698Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 14 00:06:25.844817 waagent[2250]: 2026-01-14T00:06:25.844785Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 00:06:25.844870 waagent[2250]: 2026-01-14T00:06:25.844844Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 00:06:25.845151 waagent[2250]: 2026-01-14T00:06:25.844934Z INFO EnvHandler ExtHandler Configure routes Jan 14 00:06:25.845151 waagent[2250]: 2026-01-14T00:06:25.844980Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 14 00:06:25.845371 waagent[2250]: 2026-01-14T00:06:25.845340Z INFO EnvHandler ExtHandler Gateway:None Jan 14 00:06:25.845580 waagent[2250]: 2026-01-14T00:06:25.845543Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 14 00:06:25.845618 waagent[2250]: 2026-01-14T00:06:25.845606Z INFO EnvHandler ExtHandler Routes:None Jan 14 00:06:25.845750 waagent[2250]: 2026-01-14T00:06:25.845719Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 14 00:06:25.846037 waagent[2250]: 2026-01-14T00:06:25.846009Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 14 00:06:25.852397 waagent[2250]: 2026-01-14T00:06:25.852366Z INFO ExtHandler ExtHandler Jan 14 00:06:25.852515 waagent[2250]: 2026-01-14T00:06:25.852495Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 00bbc33f-8c92-4248-b635-c745018d1cca correlation a8f35f60-99ab-447a-9c83-37934655725b created: 2026-01-14T00:05:39.152489Z] Jan 14 00:06:25.852835 waagent[2250]: 2026-01-14T00:06:25.852806Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jan 14 00:06:25.853346 waagent[2250]: 2026-01-14T00:06:25.853315Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jan 14 00:06:25.861600 waagent[2250]: 2026-01-14T00:06:25.861550Z INFO MonitorHandler ExtHandler Network interfaces: Jan 14 00:06:25.861600 waagent[2250]: Executing ['ip', '-a', '-o', 'link']: Jan 14 00:06:25.861600 waagent[2250]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 14 00:06:25.861600 waagent[2250]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:d0:52:41 brd ff:ff:ff:ff:ff:ff\ altname enx7ced8dd05241 Jan 14 00:06:25.861600 waagent[2250]: 3: enP25661s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:d0:52:41 brd ff:ff:ff:ff:ff:ff\ altname enP25661p0s2 Jan 14 00:06:25.861600 waagent[2250]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 14 00:06:25.861600 waagent[2250]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 14 00:06:25.861600 waagent[2250]: 2: eth0 inet 10.200.20.18/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 14 00:06:25.861600 waagent[2250]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 14 00:06:25.861600 waagent[2250]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 14 00:06:25.861600 waagent[2250]: 2: eth0 inet6 fe80::7eed:8dff:fed0:5241/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 14 00:06:25.881932 waagent[2250]: 2026-01-14T00:06:25.881864Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jan 14 00:06:25.881932 waagent[2250]: Try `iptables -h' or 'iptables --help' for more information.) 
Jan 14 00:06:25.882646 waagent[2250]: 2026-01-14T00:06:25.882615Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 086EFF2B-5D91-40C6-A42A-90ED53357202;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jan 14 00:06:25.893961 waagent[2250]: 2026-01-14T00:06:25.893906Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jan 14 00:06:25.893961 waagent[2250]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 00:06:25.893961 waagent[2250]: pkts bytes target prot opt in out source destination Jan 14 00:06:25.893961 waagent[2250]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 14 00:06:25.893961 waagent[2250]: pkts bytes target prot opt in out source destination Jan 14 00:06:25.893961 waagent[2250]: Chain OUTPUT (policy ACCEPT 2 packets, 104 bytes) Jan 14 00:06:25.893961 waagent[2250]: pkts bytes target prot opt in out source destination Jan 14 00:06:25.893961 waagent[2250]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 14 00:06:25.893961 waagent[2250]: 2 112 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 14 00:06:25.893961 waagent[2250]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 14 00:06:25.897185 waagent[2250]: 2026-01-14T00:06:25.897140Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 14 00:06:25.897185 waagent[2250]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 00:06:25.897185 waagent[2250]: pkts bytes target prot opt in out source destination Jan 14 00:06:25.897185 waagent[2250]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 14 00:06:25.897185 waagent[2250]: pkts bytes target prot opt in out source destination Jan 14 00:06:25.897185 waagent[2250]: Chain OUTPUT (policy ACCEPT 2 packets, 104 bytes) Jan 14 00:06:25.897185 waagent[2250]: pkts bytes target prot opt in out source destination Jan 14 00:06:25.897185 waagent[2250]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 14 00:06:25.897185 waagent[2250]: 5 647 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 14 00:06:25.897185 waagent[2250]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 14 00:06:25.897362 waagent[2250]: 2026-01-14T00:06:25.897339Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 14 00:06:31.715322 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 00:06:31.717012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:06:31.833332 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:06:31.843309 (kubelet)[2402]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:06:31.934582 kubelet[2402]: E0114 00:06:31.934516 2402 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:06:31.937228 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:06:31.937469 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:06:31.938116 systemd[1]: kubelet.service: Consumed 116ms CPU time, 107.2M memory peak. 
Jan 14 00:06:41.964617 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 14 00:06:41.966002 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:06:42.101067 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:06:42.104027 (kubelet)[2416]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:06:42.127123 kubelet[2416]: E0114 00:06:42.127070 2416 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:06:42.129115 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:06:42.129230 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:06:42.129792 systemd[1]: kubelet.service: Consumed 103ms CPU time, 107M memory peak. Jan 14 00:06:44.337779 chronyd[1969]: Selected source PHC0 Jan 14 00:06:52.214246 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 14 00:06:52.215625 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:06:52.323099 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:06:52.326094 (kubelet)[2431]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:06:52.458349 kubelet[2431]: E0114 00:06:52.458295 2431 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:06:52.460436 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:06:52.460551 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:06:52.462083 systemd[1]: kubelet.service: Consumed 105ms CPU time, 104.8M memory peak. Jan 14 00:06:55.261604 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 14 00:06:55.262725 systemd[1]: Started sshd@0-10.200.20.18:22-10.200.16.10:41686.service - OpenSSH per-connection server daemon (10.200.16.10:41686). Jan 14 00:06:55.722753 sshd[2439]: Accepted publickey for core from 10.200.16.10 port 41686 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:06:55.723809 sshd-session[2439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:06:55.728062 systemd-logind[1997]: New session 4 of user core. Jan 14 00:06:55.734140 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 14 00:06:56.044100 systemd[1]: Started sshd@1-10.200.20.18:22-10.200.16.10:41696.service - OpenSSH per-connection server daemon (10.200.16.10:41696). Jan 14 00:06:56.464770 sshd[2446]: Accepted publickey for core from 10.200.16.10 port 41696 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:06:56.465884 sshd-session[2446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:06:56.469625 systemd-logind[1997]: New session 5 of user core. 
Jan 14 00:06:56.477177 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 14 00:06:56.699037 sshd[2450]: Connection closed by 10.200.16.10 port 41696 Jan 14 00:06:56.698777 sshd-session[2446]: pam_unix(sshd:session): session closed for user core Jan 14 00:06:56.702137 systemd[1]: sshd@1-10.200.20.18:22-10.200.16.10:41696.service: Deactivated successfully. Jan 14 00:06:56.703780 systemd[1]: session-5.scope: Deactivated successfully. Jan 14 00:06:56.705147 systemd-logind[1997]: Session 5 logged out. Waiting for processes to exit. Jan 14 00:06:56.706587 systemd-logind[1997]: Removed session 5. Jan 14 00:06:56.796777 systemd[1]: Started sshd@2-10.200.20.18:22-10.200.16.10:41708.service - OpenSSH per-connection server daemon (10.200.16.10:41708). Jan 14 00:06:57.224251 sshd[2456]: Accepted publickey for core from 10.200.16.10 port 41708 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:06:57.225329 sshd-session[2456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:06:57.228984 systemd-logind[1997]: New session 6 of user core. Jan 14 00:06:57.239182 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 14 00:06:57.453827 sshd[2460]: Connection closed by 10.200.16.10 port 41708 Jan 14 00:06:57.453587 sshd-session[2456]: pam_unix(sshd:session): session closed for user core Jan 14 00:06:57.458122 systemd[1]: sshd@2-10.200.20.18:22-10.200.16.10:41708.service: Deactivated successfully. Jan 14 00:06:57.459810 systemd[1]: session-6.scope: Deactivated successfully. Jan 14 00:06:57.460708 systemd-logind[1997]: Session 6 logged out. Waiting for processes to exit. Jan 14 00:06:57.462421 systemd-logind[1997]: Removed session 6. Jan 14 00:06:57.545353 systemd[1]: Started sshd@3-10.200.20.18:22-10.200.16.10:41720.service - OpenSSH per-connection server daemon (10.200.16.10:41720). Jan 14 00:06:57.969873 sshd[2466]: Accepted publickey for core from 10.200.16.10 port 41720 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:06:57.970939 sshd-session[2466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:06:57.974543 systemd-logind[1997]: New session 7 of user core. Jan 14 00:06:57.988311 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 14 00:06:58.205334 sshd[2470]: Connection closed by 10.200.16.10 port 41720 Jan 14 00:06:58.204672 sshd-session[2466]: pam_unix(sshd:session): session closed for user core Jan 14 00:06:58.207950 systemd[1]: sshd@3-10.200.20.18:22-10.200.16.10:41720.service: Deactivated successfully. Jan 14 00:06:58.209358 systemd[1]: session-7.scope: Deactivated successfully. Jan 14 00:06:58.211104 systemd-logind[1997]: Session 7 logged out. Waiting for processes to exit. Jan 14 00:06:58.211943 systemd-logind[1997]: Removed session 7. Jan 14 00:06:58.290480 systemd[1]: Started sshd@4-10.200.20.18:22-10.200.16.10:41726.service - OpenSSH per-connection server daemon (10.200.16.10:41726). Jan 14 00:06:58.380912 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jan 14 00:06:58.684811 sshd[2476]: Accepted publickey for core from 10.200.16.10 port 41726 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:06:58.685667 sshd-session[2476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:06:58.689361 systemd-logind[1997]: New session 8 of user core. Jan 14 00:06:58.697346 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 14 00:06:58.859084 sudo[2481]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 14 00:06:58.859653 sudo[2481]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 00:06:58.871412 sudo[2481]: pam_unix(sudo:session): session closed for user root Jan 14 00:06:58.943451 sshd[2480]: Connection closed by 10.200.16.10 port 41726 Jan 14 00:06:58.943228 sshd-session[2476]: pam_unix(sshd:session): session closed for user core Jan 14 00:06:58.948076 systemd[1]: sshd@4-10.200.20.18:22-10.200.16.10:41726.service: Deactivated successfully. Jan 14 00:06:58.949897 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 00:06:58.950858 systemd-logind[1997]: Session 8 logged out. Waiting for processes to exit. Jan 14 00:06:58.952841 systemd-logind[1997]: Removed session 8. Jan 14 00:06:59.036460 systemd[1]: Started sshd@5-10.200.20.18:22-10.200.16.10:41730.service - OpenSSH per-connection server daemon (10.200.16.10:41730). Jan 14 00:06:59.459824 sshd[2488]: Accepted publickey for core from 10.200.16.10 port 41730 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:06:59.460931 sshd-session[2488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:06:59.464882 systemd-logind[1997]: New session 9 of user core. Jan 14 00:06:59.471156 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 14 00:06:59.618519 sudo[2494]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 14 00:06:59.618726 sudo[2494]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 00:06:59.631347 sudo[2494]: pam_unix(sudo:session): session closed for user root Jan 14 00:06:59.636083 sudo[2493]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 14 00:06:59.636278 sudo[2493]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 00:06:59.641488 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 00:06:59.669000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 00:06:59.674171 kernel: kauditd_printk_skb: 80 callbacks suppressed Jan 14 00:06:59.674213 kernel: audit: type=1305 audit(1768349219.669:249): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 00:06:59.675171 augenrules[2518]: No rules Jan 14 00:06:59.702393 kernel: audit: type=1300 audit(1768349219.669:249): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe7a21e60 a2=420 a3=0 items=0 ppid=2499 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:59.669000 audit[2518]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe7a21e60 a2=420 a3=0 items=0 ppid=2499 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:59.683592 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 00:06:59.683795 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 14 00:06:59.669000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 00:06:59.711129 kernel: audit: type=1327 audit(1768349219.669:249): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 00:06:59.704285 sudo[2493]: pam_unix(sudo:session): session closed for user root Jan 14 00:06:59.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:59.725353 kernel: audit: type=1130 audit(1768349219.682:250): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:59.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:59.738377 kernel: audit: type=1131 audit(1768349219.682:251): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:59.702000 audit[2493]: USER_END pid=2493 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:06:59.752637 kernel: audit: type=1106 audit(1768349219.702:252): pid=2493 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:06:59.702000 audit[2493]: CRED_DISP pid=2493 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:06:59.765662 kernel: audit: type=1104 audit(1768349219.702:253): pid=2493 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:06:59.781079 sshd[2492]: Connection closed by 10.200.16.10 port 41730 Jan 14 00:06:59.781400 sshd-session[2488]: pam_unix(sshd:session): session closed for user core Jan 14 00:06:59.781000 audit[2488]: USER_END pid=2488 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:59.781000 audit[2488]: CRED_DISP pid=2488 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:59.806429 systemd-logind[1997]: Session 9 logged out. Waiting for processes to exit. Jan 14 00:06:59.808285 systemd[1]: sshd@5-10.200.20.18:22-10.200.16.10:41730.service: Deactivated successfully. Jan 14 00:06:59.810669 systemd[1]: session-9.scope: Deactivated successfully. 
Jan 14 00:06:59.813103 systemd-logind[1997]: Removed session 9. Jan 14 00:06:59.820154 kernel: audit: type=1106 audit(1768349219.781:254): pid=2488 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:59.820230 kernel: audit: type=1104 audit(1768349219.781:255): pid=2488 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:59.820259 kernel: audit: type=1131 audit(1768349219.807:256): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.18:22-10.200.16.10:41730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:59.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.18:22-10.200.16.10:41730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:59.864438 systemd[1]: Started sshd@6-10.200.20.18:22-10.200.16.10:40650.service - OpenSSH per-connection server daemon (10.200.16.10:40650). Jan 14 00:06:59.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.18:22-10.200.16.10:40650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:07:00.254000 audit[2527]: USER_ACCT pid=2527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:07:00.255250 sshd[2527]: Accepted publickey for core from 10.200.16.10 port 40650 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:07:00.255000 audit[2527]: CRED_ACQ pid=2527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:07:00.255000 audit[2527]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcf66d3c0 a2=3 a3=0 items=0 ppid=1 pid=2527 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:00.255000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:07:00.256668 sshd-session[2527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:07:00.260448 systemd-logind[1997]: New session 10 of user core. Jan 14 00:07:00.279354 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 14 00:07:00.281000 audit[2527]: USER_START pid=2527 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:07:00.282000 audit[2531]: CRED_ACQ pid=2531 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:07:00.401000 audit[2532]: USER_ACCT pid=2532 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:07:00.402438 sudo[2532]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 14 00:07:00.401000 audit[2532]: CRED_REFR pid=2532 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:07:00.403048 sudo[2532]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 00:07:00.402000 audit[2532]: USER_START pid=2532 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:07:00.699891 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 14 00:07:00.712246 (dockerd)[2550]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 14 00:07:00.990338 dockerd[2550]: time="2026-01-14T00:07:00.990280598Z" level=info msg="Starting up" Jan 14 00:07:00.992016 dockerd[2550]: time="2026-01-14T00:07:00.991881022Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 14 00:07:01.000297 dockerd[2550]: time="2026-01-14T00:07:01.000264655Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 14 00:07:01.121105 dockerd[2550]: time="2026-01-14T00:07:01.121059270Z" level=info msg="Loading containers: start." 
Jan 14 00:07:01.134015 kernel: Initializing XFRM netlink socket Jan 14 00:07:01.166000 audit[2597]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=2597 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:01.166000 audit[2597]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffee10ad20 a2=0 a3=0 items=0 ppid=2550 pid=2597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.166000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 00:07:01.167000 audit[2599]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=2599 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:01.167000 audit[2599]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe8763180 a2=0 a3=0 items=0 ppid=2550 pid=2599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.167000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 00:07:01.169000 audit[2601]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2601 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:01.169000 audit[2601]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd105b700 a2=0 a3=0 items=0 ppid=2550 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.169000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 00:07:01.171000 audit[2603]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2603 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:01.171000 audit[2603]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd17d3120 a2=0 a3=0 items=0 ppid=2550 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.171000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 00:07:01.173000 audit[2605]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=2605 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:01.173000 audit[2605]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc0309950 a2=0 a3=0 items=0 ppid=2550 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.173000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 00:07:01.174000 audit[2607]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=2607 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:01.174000 audit[2607]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=112 a0=3 a1=fffffe83a6e0 a2=0 a3=0 items=0 ppid=2550 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.174000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 00:07:01.176000 audit[2609]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=2609 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:01.176000 audit[2609]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc2996850 a2=0 a3=0 items=0 ppid=2550 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.176000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 00:07:01.178000 audit[2611]: NETFILTER_CFG table=nat:12 family=2 entries=2 op=nft_register_chain pid=2611 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:01.178000 audit[2611]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffc0822500 a2=0 a3=0 items=0 ppid=2550 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.178000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 00:07:01.196000 audit[2614]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2614 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:01.196000 audit[2614]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffef6e5180 a2=0 a3=0 items=0 ppid=2550 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.196000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 14 00:07:01.197000 audit[2616]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=2616 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:01.197000 audit[2616]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd97e9650 a2=0 a3=0 items=0 ppid=2550 pid=2616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.197000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 00:07:01.199000 audit[2618]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2618 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:01.199000 audit[2618]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 
a1=fffff2f5af90 a2=0 a3=0 items=0 ppid=2550 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.199000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 00:07:01.201000 audit[2620]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2620 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:01.201000 audit[2620]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffc3aa48f0 a2=0 a3=0 items=0 ppid=2550 pid=2620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.201000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 00:07:01.202000 audit[2622]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_rule pid=2622 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:01.202000 audit[2622]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffc6ccdc60 a2=0 a3=0 items=0 ppid=2550 pid=2622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.202000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 00:07:01.238000 audit[2652]: NETFILTER_CFG table=nat:18 family=10 entries=2 op=nft_register_chain pid=2652 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:01.238000 audit[2652]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffff1f02220 a2=0 a3=0 items=0 ppid=2550 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.238000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 00:07:01.240000 audit[2654]: NETFILTER_CFG table=filter:19 family=10 entries=2 op=nft_register_chain pid=2654 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:01.240000 audit[2654]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd58fc430 a2=0 a3=0 items=0 ppid=2550 pid=2654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.240000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 00:07:01.242000 audit[2656]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2656 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:01.242000 audit[2656]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff63ef7f0 a2=0 a3=0 items=0 ppid=2550 pid=2656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.242000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 00:07:01.243000 audit[2658]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2658 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:01.243000 audit[2658]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc81be650 a2=0 a3=0 items=0 ppid=2550 pid=2658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.243000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 00:07:01.245000 audit[2660]: NETFILTER_CFG table=filter:22 family=10 entries=1 op=nft_register_chain pid=2660 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:01.245000 audit[2660]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff4de0c60 a2=0 a3=0 items=0 ppid=2550 pid=2660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.245000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 00:07:01.246000 audit[2662]: NETFILTER_CFG table=filter:23 family=10 entries=1 op=nft_register_chain pid=2662 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:01.246000 audit[2662]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc1ad3960 a2=0 a3=0 items=0 ppid=2550 pid=2662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.246000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 00:07:01.248000 audit[2664]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_chain pid=2664 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:01.248000 audit[2664]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffff55fc40 a2=0 a3=0 items=0 ppid=2550 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.248000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 00:07:01.250000 audit[2666]: NETFILTER_CFG table=nat:25 family=10 entries=2 op=nft_register_chain pid=2666 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:01.250000 audit[2666]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffd02b7f60 a2=0 a3=0 items=0 ppid=2550 pid=2666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.250000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 00:07:01.252000 audit[2668]: NETFILTER_CFG table=nat:26 family=10 entries=2 op=nft_register_chain pid=2668 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:01.252000 audit[2668]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=fffff7ced0e0 a2=0 a3=0 items=0 ppid=2550 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.252000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 14 00:07:01.254000 audit[2670]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=2670 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:01.254000 audit[2670]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe2b0a400 a2=0 a3=0 items=0 ppid=2550 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.254000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 00:07:01.256000 audit[2672]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=2672 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:01.256000 audit[2672]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffd300dbc0 a2=0 a3=0 items=0 ppid=2550 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.256000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 00:07:01.257000 audit[2674]: NETFILTER_CFG table=filter:29 family=10 entries=1 op=nft_register_rule pid=2674 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:01.257000 audit[2674]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffdbe26fc0 a2=0 a3=0 items=0 ppid=2550 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.257000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 00:07:01.259000 audit[2676]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_rule pid=2676 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:01.259000 audit[2676]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffd9f27ec0 a2=0 a3=0 items=0 ppid=2550 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.259000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 00:07:01.263000 audit[2681]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=2681 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:01.263000 audit[2681]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc89ffe20 a2=0 a3=0 items=0 ppid=2550 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.263000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 00:07:01.265000 audit[2683]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=2683 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:01.265000 audit[2683]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd9421100 a2=0 a3=0 items=0 ppid=2550 pid=2683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.265000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 00:07:01.267000 audit[2685]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2685 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:01.267000 audit[2685]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffcb2d4440 a2=0 a3=0 items=0 ppid=2550 pid=2685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.267000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 00:07:01.269000 audit[2687]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=2687 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:01.269000 audit[2687]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffced352a0 a2=0 a3=0 items=0 ppid=2550 pid=2687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.269000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 00:07:01.271000 audit[2689]: NETFILTER_CFG table=filter:35 family=10 entries=1 op=nft_register_rule pid=2689 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:01.271000 audit[2689]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffffef551a0 a2=0 a3=0 items=0 ppid=2550 pid=2689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.271000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 00:07:01.273000 audit[2691]: NETFILTER_CFG table=filter:36 family=10 entries=1 op=nft_register_rule pid=2691 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:01.273000 audit[2691]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffff66f3a80 a2=0 a3=0 items=0 ppid=2550 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.273000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 00:07:01.308000 audit[2698]: NETFILTER_CFG table=nat:37 family=2 entries=2 op=nft_register_chain pid=2698 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:01.308000 audit[2698]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffd9d15700 a2=0 a3=0 items=0 ppid=2550 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.308000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 14 00:07:01.309000 audit[2700]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=2700 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:01.309000 audit[2700]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffcfa645a0 a2=0 a3=0 items=0 ppid=2550 pid=2700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.309000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 14 00:07:01.315000 audit[2708]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2708 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:01.315000 audit[2708]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=fffff2f76e60 a2=0 a3=0 items=0 ppid=2550 pid=2708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.315000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 14 00:07:01.319000 audit[2713]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2713 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:01.319000 audit[2713]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffd4dfffd0 a2=0 a3=0 items=0 ppid=2550 pid=2713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.319000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 14 00:07:01.321000 audit[2715]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2715 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 
00:07:01.321000 audit[2715]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffc1a9b340 a2=0 a3=0 items=0 ppid=2550 pid=2715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.321000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 14 00:07:01.323000 audit[2717]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=2717 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:01.323000 audit[2717]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffcddb5c60 a2=0 a3=0 items=0 ppid=2550 pid=2717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.323000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 14 00:07:01.324000 audit[2719]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=2719 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:01.324000 audit[2719]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffe4c11fa0 a2=0 a3=0 items=0 ppid=2550 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.324000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 00:07:01.326000 audit[2721]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_rule pid=2721 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:01.326000 audit[2721]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc34181c0 a2=0 a3=0 items=0 ppid=2550 pid=2721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:01.326000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 14 00:07:01.327802 systemd-networkd[1621]: docker0: Link UP Jan 14 00:07:01.345509 dockerd[2550]: time="2026-01-14T00:07:01.345413828Z" level=info msg="Loading containers: done." Jan 14 00:07:01.356197 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3233587037-merged.mount: Deactivated successfully. 
Jan 14 00:07:01.397651 dockerd[2550]: time="2026-01-14T00:07:01.397594934Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 00:07:01.397830 dockerd[2550]: time="2026-01-14T00:07:01.397693210Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 14 00:07:01.397830 dockerd[2550]: time="2026-01-14T00:07:01.397800887Z" level=info msg="Initializing buildkit" Jan 14 00:07:01.447687 dockerd[2550]: time="2026-01-14T00:07:01.447631271Z" level=info msg="Completed buildkit initialization" Jan 14 00:07:01.451293 dockerd[2550]: time="2026-01-14T00:07:01.451238793Z" level=info msg="Daemon has completed initialization" Jan 14 00:07:01.452032 dockerd[2550]: time="2026-01-14T00:07:01.451423186Z" level=info msg="API listen on /run/docker.sock" Jan 14 00:07:01.452302 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 14 00:07:01.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:07:02.113809 containerd[2043]: time="2026-01-14T00:07:02.113767629Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 14 00:07:02.464354 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 14 00:07:02.466175 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:07:02.582271 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:07:02.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:07:02.587428 (kubelet)[2767]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:07:02.706530 kubelet[2767]: E0114 00:07:02.706455 2767 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:07:02.708286 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:07:02.708403 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:07:02.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 00:07:02.708973 systemd[1]: kubelet.service: Consumed 108ms CPU time, 107M memory peak. Jan 14 00:07:03.523706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4284182695.mount: Deactivated successfully. 
Jan 14 00:07:04.390046 containerd[2043]: time="2026-01-14T00:07:04.389522961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:04.392499 containerd[2043]: time="2026-01-14T00:07:04.392450124Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=22977756" Jan 14 00:07:04.396080 containerd[2043]: time="2026-01-14T00:07:04.396036397Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:04.400127 containerd[2043]: time="2026-01-14T00:07:04.400091764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:04.401022 containerd[2043]: time="2026-01-14T00:07:04.400782067Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 2.286977972s" Jan 14 00:07:04.401022 containerd[2043]: time="2026-01-14T00:07:04.400818036Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\"" Jan 14 00:07:04.401607 containerd[2043]: time="2026-01-14T00:07:04.401549325Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 14 00:07:05.748026 containerd[2043]: time="2026-01-14T00:07:05.747576957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:05.755755 containerd[2043]: time="2026-01-14T00:07:05.755706083Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19127323" Jan 14 00:07:05.759056 containerd[2043]: time="2026-01-14T00:07:05.758984982Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:05.764023 containerd[2043]: time="2026-01-14T00:07:05.763836752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:05.764828 containerd[2043]: time="2026-01-14T00:07:05.764806804Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.363232126s" Jan 14 00:07:05.764923 containerd[2043]: time="2026-01-14T00:07:05.764911529Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\"" Jan 14 00:07:05.765448 
containerd[2043]: time="2026-01-14T00:07:05.765421768Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 14 00:07:06.834532 containerd[2043]: time="2026-01-14T00:07:06.834472836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:06.839391 containerd[2043]: time="2026-01-14T00:07:06.839209457Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=0" Jan 14 00:07:06.842478 containerd[2043]: time="2026-01-14T00:07:06.842451635Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:06.847235 containerd[2043]: time="2026-01-14T00:07:06.847147574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:06.847986 containerd[2043]: time="2026-01-14T00:07:06.847918969Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 1.082462487s" Jan 14 00:07:06.847986 containerd[2043]: time="2026-01-14T00:07:06.847947946Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\"" Jan 14 00:07:06.848745 containerd[2043]: time="2026-01-14T00:07:06.848695324Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 14 00:07:07.192371 update_engine[2007]: I20260114 00:07:07.192078 2007 update_attempter.cc:509] Updating boot flags... Jan 14 00:07:08.265014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3383881525.mount: Deactivated successfully. 
Jan 14 00:07:08.487033 containerd[2043]: time="2026-01-14T00:07:08.486892684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:08.498022 containerd[2043]: time="2026-01-14T00:07:08.497927433Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=22801532" Jan 14 00:07:08.501362 containerd[2043]: time="2026-01-14T00:07:08.501309134Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:08.505586 containerd[2043]: time="2026-01-14T00:07:08.505536910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:08.506085 containerd[2043]: time="2026-01-14T00:07:08.505825506Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.657105438s" Jan 14 00:07:08.506085 containerd[2043]: time="2026-01-14T00:07:08.505856116Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\"" Jan 14 00:07:08.506373 containerd[2043]: time="2026-01-14T00:07:08.506292910Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 14 00:07:09.215837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1600446432.mount: Deactivated successfully. 
Jan 14 00:07:10.110740 containerd[2043]: time="2026-01-14T00:07:10.110075453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:10.113034 containerd[2043]: time="2026-01-14T00:07:10.112996351Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=19828106" Jan 14 00:07:10.116254 containerd[2043]: time="2026-01-14T00:07:10.116230638Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:10.122092 containerd[2043]: time="2026-01-14T00:07:10.122066713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:10.122497 containerd[2043]: time="2026-01-14T00:07:10.122467034Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.616146299s" Jan 14 00:07:10.122543 containerd[2043]: time="2026-01-14T00:07:10.122499115Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Jan 14 00:07:10.123466 containerd[2043]: time="2026-01-14T00:07:10.123448899Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 14 00:07:10.718264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2356269474.mount: Deactivated successfully. 
Jan 14 00:07:10.738724 containerd[2043]: time="2026-01-14T00:07:10.738668053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:10.741908 containerd[2043]: time="2026-01-14T00:07:10.741734157Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Jan 14 00:07:10.748054 containerd[2043]: time="2026-01-14T00:07:10.748029339Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:10.752866 containerd[2043]: time="2026-01-14T00:07:10.752028186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:10.752866 containerd[2043]: time="2026-01-14T00:07:10.752387105Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 628.755303ms" Jan 14 00:07:10.752866 containerd[2043]: time="2026-01-14T00:07:10.752412978Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Jan 14 00:07:10.752977 containerd[2043]: time="2026-01-14T00:07:10.752922271Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 14 00:07:11.371562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2131135987.mount: Deactivated successfully. Jan 14 00:07:12.714224 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 14 00:07:12.717219 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:07:12.828028 kernel: kauditd_printk_skb: 134 callbacks suppressed Jan 14 00:07:12.828131 kernel: audit: type=1130 audit(1768349232.823:309): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:07:12.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:07:12.824179 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:07:12.847230 (kubelet)[3025]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:07:12.872458 kubelet[3025]: E0114 00:07:12.872398 3025 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:07:12.874435 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:07:12.874548 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:07:12.874923 systemd[1]: kubelet.service: Consumed 106ms CPU time, 107M memory peak. 
Jan 14 00:07:12.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 00:07:12.889012 kernel: audit: type=1131 audit(1768349232.873:310): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 00:07:14.889785 containerd[2043]: time="2026-01-14T00:07:14.889560648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:14.942341 containerd[2043]: time="2026-01-14T00:07:14.942253561Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=96314891" Jan 14 00:07:14.946480 containerd[2043]: time="2026-01-14T00:07:14.946344275Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:14.951021 containerd[2043]: time="2026-01-14T00:07:14.950625870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:14.951650 containerd[2043]: time="2026-01-14T00:07:14.951484314Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 4.198540041s" Jan 14 00:07:14.951650 containerd[2043]: time="2026-01-14T00:07:14.951535908Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Jan 14 00:07:17.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:07:17.906477 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:07:17.906826 systemd[1]: kubelet.service: Consumed 106ms CPU time, 107M memory peak. Jan 14 00:07:17.912223 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:07:17.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:07:17.983231 kernel: audit: type=1130 audit(1768349237.905:311): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:07:17.983694 kernel: audit: type=1131 audit(1768349237.905:312): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:07:17.998292 systemd[1]: Reload requested from client PID 3065 ('systemctl') (unit session-10.scope)... Jan 14 00:07:17.998301 systemd[1]: Reloading... 
Jan 14 00:07:18.117021 zram_generator::config[3126]: No configuration found. Jan 14 00:07:18.269168 systemd[1]: Reloading finished in 270 ms. Jan 14 00:07:18.289000 audit: BPF prog-id=87 op=LOAD Jan 14 00:07:18.289000 audit: BPF prog-id=81 op=UNLOAD Jan 14 00:07:18.302548 kernel: audit: type=1334 audit(1768349238.289:313): prog-id=87 op=LOAD Jan 14 00:07:18.302613 kernel: audit: type=1334 audit(1768349238.289:314): prog-id=81 op=UNLOAD Jan 14 00:07:18.289000 audit: BPF prog-id=88 op=LOAD Jan 14 00:07:18.307017 kernel: audit: type=1334 audit(1768349238.289:315): prog-id=88 op=LOAD Jan 14 00:07:18.289000 audit: BPF prog-id=89 op=LOAD Jan 14 00:07:18.311466 kernel: audit: type=1334 audit(1768349238.289:316): prog-id=89 op=LOAD Jan 14 00:07:18.289000 audit: BPF prog-id=82 op=UNLOAD Jan 14 00:07:18.317576 kernel: audit: type=1334 audit(1768349238.289:317): prog-id=82 op=UNLOAD Jan 14 00:07:18.289000 audit: BPF prog-id=83 op=UNLOAD Jan 14 00:07:18.324465 kernel: audit: type=1334 audit(1768349238.289:318): prog-id=83 op=UNLOAD Jan 14 00:07:18.290000 audit: BPF prog-id=90 op=LOAD Jan 14 00:07:18.329111 kernel: audit: type=1334 audit(1768349238.290:319): prog-id=90 op=LOAD Jan 14 00:07:18.335321 kernel: audit: type=1334 audit(1768349238.290:320): prog-id=74 op=UNLOAD Jan 14 00:07:18.290000 audit: BPF prog-id=74 op=UNLOAD Jan 14 00:07:18.290000 audit: BPF prog-id=91 op=LOAD Jan 14 00:07:18.290000 audit: BPF prog-id=69 op=UNLOAD Jan 14 00:07:18.296000 audit: BPF prog-id=92 op=LOAD Jan 14 00:07:18.296000 audit: BPF prog-id=84 op=UNLOAD Jan 14 00:07:18.296000 audit: BPF prog-id=93 op=LOAD Jan 14 00:07:18.306000 audit: BPF prog-id=94 op=LOAD Jan 14 00:07:18.311000 audit: BPF prog-id=85 op=UNLOAD Jan 14 00:07:18.311000 audit: BPF prog-id=86 op=UNLOAD Jan 14 00:07:18.316000 audit: BPF prog-id=95 op=LOAD Jan 14 00:07:18.316000 audit: BPF prog-id=96 op=LOAD Jan 14 00:07:18.316000 audit: BPF prog-id=67 op=UNLOAD Jan 14 00:07:18.316000 audit: BPF prog-id=68 op=UNLOAD Jan 14 00:07:18.317000 audit: BPF prog-id=97 op=LOAD Jan 14 00:07:18.317000 audit: BPF prog-id=73 op=UNLOAD Jan 14 00:07:18.324000 audit: BPF prog-id=98 op=LOAD Jan 14 00:07:18.324000 audit: BPF prog-id=75 op=UNLOAD Jan 14 00:07:18.328000 audit: BPF prog-id=99 op=LOAD Jan 14 00:07:18.335000 audit: BPF prog-id=100 op=LOAD Jan 14 00:07:18.335000 audit: BPF prog-id=76 op=UNLOAD Jan 14 00:07:18.335000 audit: BPF prog-id=77 op=UNLOAD Jan 14 00:07:18.335000 audit: BPF prog-id=101 op=LOAD Jan 14 00:07:18.335000 audit: BPF prog-id=70 op=UNLOAD Jan 14 00:07:18.335000 audit: BPF prog-id=102 op=LOAD Jan 14 00:07:18.335000 audit: BPF prog-id=103 op=LOAD Jan 14 00:07:18.335000 audit: BPF prog-id=71 op=UNLOAD Jan 14 00:07:18.335000 audit: BPF prog-id=72 op=UNLOAD Jan 14 00:07:18.336000 audit: BPF prog-id=104 op=LOAD Jan 14 00:07:18.336000 audit: BPF prog-id=78 op=UNLOAD Jan 14 00:07:18.336000 audit: BPF prog-id=105 op=LOAD Jan 14 00:07:18.336000 audit: BPF prog-id=106 op=LOAD Jan 14 00:07:18.336000 audit: BPF prog-id=79 op=UNLOAD Jan 14 00:07:18.336000 audit: BPF prog-id=80 op=UNLOAD Jan 14 00:07:18.349468 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 00:07:18.349535 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 00:07:18.349874 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:07:18.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=failed' Jan 14 00:07:18.349942 systemd[1]: kubelet.service: Consumed 79ms CPU time, 95.2M memory peak. Jan 14 00:07:18.351760 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:07:18.572089 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:07:18.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:07:18.576410 (kubelet)[3181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 00:07:18.605410 kubelet[3181]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 00:07:18.605750 kubelet[3181]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 00:07:18.687654 kubelet[3181]: I0114 00:07:18.687573 3181 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 00:07:19.040444 kubelet[3181]: I0114 00:07:19.040331 3181 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 14 00:07:19.040444 kubelet[3181]: I0114 00:07:19.040359 3181 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 00:07:19.041700 kubelet[3181]: I0114 00:07:19.041603 3181 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 14 00:07:19.041763 kubelet[3181]: I0114 00:07:19.041743 3181 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 00:07:19.042114 kubelet[3181]: I0114 00:07:19.042095 3181 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 00:07:19.197377 kubelet[3181]: E0114 00:07:19.197345 3181 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 14 00:07:19.197629 kubelet[3181]: I0114 00:07:19.197493 3181 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 00:07:19.200395 kubelet[3181]: I0114 00:07:19.200375 3181 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 00:07:19.203082 kubelet[3181]: I0114 00:07:19.203015 3181 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 14 00:07:19.203704 kubelet[3181]: I0114 00:07:19.203303 3181 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 00:07:19.203704 kubelet[3181]: I0114 00:07:19.203327 3181 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547.0.0-n-16ff4e9fd7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 00:07:19.203704 kubelet[3181]: I0114 00:07:19.203435 3181 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 00:07:19.203704 kubelet[3181]: I0114 00:07:19.203441 3181 container_manager_linux.go:306] "Creating device plugin manager" Jan 14 00:07:19.203866 kubelet[3181]: I0114 00:07:19.203546 3181 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 14 00:07:19.209988 kubelet[3181]: I0114 00:07:19.209963 3181 state_mem.go:36] "Initialized new in-memory state store" Jan 14 00:07:19.211167 kubelet[3181]: I0114 00:07:19.211151 3181 kubelet.go:475] "Attempting to sync node with API server" Jan 14 00:07:19.211251 kubelet[3181]: I0114 00:07:19.211242 3181 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 00:07:19.211697 kubelet[3181]: E0114 00:07:19.211675 3181 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547.0.0-n-16ff4e9fd7&limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 00:07:19.213808 kubelet[3181]: I0114 00:07:19.212013 3181 kubelet.go:387] "Adding apiserver pod source" Jan 14 00:07:19.214589 kubelet[3181]: I0114 00:07:19.213897 3181 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 00:07:19.214589 kubelet[3181]: E0114 00:07:19.214561 3181 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.200.20.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 00:07:19.214762 kubelet[3181]: I0114 00:07:19.214746 3181 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 00:07:19.215208 kubelet[3181]: I0114 00:07:19.215192 3181 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 00:07:19.215287 kubelet[3181]: I0114 00:07:19.215278 3181 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 14 00:07:19.215366 kubelet[3181]: W0114 00:07:19.215357 3181 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 14 00:07:19.218267 kubelet[3181]: I0114 00:07:19.218252 3181 server.go:1262] "Started kubelet" Jan 14 00:07:19.220901 kubelet[3181]: I0114 00:07:19.218478 3181 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 00:07:19.220949 kubelet[3181]: I0114 00:07:19.220913 3181 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 14 00:07:19.221209 kubelet[3181]: I0114 00:07:19.221184 3181 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 00:07:19.221668 kubelet[3181]: I0114 00:07:19.221651 3181 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 00:07:19.224564 kubelet[3181]: E0114 00:07:19.223582 3181 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.18:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.18:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4547.0.0-n-16ff4e9fd7.188a7043926f34cd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547.0.0-n-16ff4e9fd7,UID:ci-4547.0.0-n-16ff4e9fd7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547.0.0-n-16ff4e9fd7,},FirstTimestamp:2026-01-14 00:07:19.218222285 +0000 UTC m=+0.639319749,LastTimestamp:2026-01-14 00:07:19.218222285 +0000 UTC m=+0.639319749,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547.0.0-n-16ff4e9fd7,}" Jan 14 00:07:19.225644 kubelet[3181]: I0114 00:07:19.225607 3181 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 00:07:19.226255 kubelet[3181]: I0114 00:07:19.226230 3181 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 00:07:19.228965 kubelet[3181]: I0114 00:07:19.227125 3181 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 14 00:07:19.228965 kubelet[3181]: E0114 00:07:19.227250 3181 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4547.0.0-n-16ff4e9fd7\" not found" Jan 14 00:07:19.228965 kubelet[3181]: I0114 00:07:19.227960 3181 volume_manager.go:313] "Starting Kubelet Volume Manager" 
Jan 14 00:07:19.228965 kubelet[3181]: I0114 00:07:19.228081 3181 reconciler.go:29] "Reconciler: start to sync state" Jan 14 00:07:19.228965 kubelet[3181]: I0114 00:07:19.228313 3181 server.go:310] "Adding debug handlers to kubelet server" Jan 14 00:07:19.228965 kubelet[3181]: E0114 00:07:19.228793 3181 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 00:07:19.229291 kubelet[3181]: E0114 00:07:19.228853 3181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-n-16ff4e9fd7?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="200ms" Jan 14 00:07:19.228000 audit[3197]: NETFILTER_CFG table=mangle:45 family=10 entries=2 op=nft_register_chain pid=3197 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:19.228000 audit[3197]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd58b9d80 a2=0 a3=0 items=0 ppid=3181 pid=3197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:19.228000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 00:07:19.230257 kubelet[3181]: I0114 00:07:19.230235 3181 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 14 00:07:19.230579 kubelet[3181]: I0114 00:07:19.230538 3181 factory.go:223] Registration of the containerd container factory successfully Jan 14 00:07:19.230579 kubelet[3181]: I0114 00:07:19.230569 3181 factory.go:223] Registration of the systemd container factory successfully Jan 14 00:07:19.230649 kubelet[3181]: I0114 00:07:19.230628 3181 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 00:07:19.230000 audit[3198]: NETFILTER_CFG table=mangle:46 family=2 entries=2 op=nft_register_chain pid=3198 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:19.230000 audit[3198]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffdbc47fa0 a2=0 a3=0 items=0 ppid=3181 pid=3198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:19.230000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 00:07:19.230000 audit[3199]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=3199 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:19.230000 audit[3199]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffc2245a0 a2=0 a3=0 items=0 ppid=3181 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:19.230000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 00:07:19.232000 audit[3201]: NETFILTER_CFG table=mangle:48 family=10 entries=1 op=nft_register_chain pid=3201 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:19.232000 audit[3201]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd2417780 a2=0 a3=0 items=0 ppid=3181 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:19.232000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 00:07:19.234073 kubelet[3181]: E0114 00:07:19.233963 3181 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 00:07:19.233000 audit[3202]: NETFILTER_CFG table=nat:49 family=10 entries=1 op=nft_register_chain pid=3202 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:19.233000 audit[3202]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc7d65420 a2=0 a3=0 items=0 ppid=3181 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:19.233000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 00:07:19.234000 audit[3203]: NETFILTER_CFG table=filter:50 family=10 entries=1 op=nft_register_chain pid=3203 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:19.234000 audit[3203]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffce3df9e0 a2=0 a3=0 items=0 ppid=3181 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:19.234000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 00:07:19.235000 audit[3204]: NETFILTER_CFG table=filter:51 family=2 entries=2 op=nft_register_chain pid=3204 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:19.235000 audit[3204]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffcf9800c0 a2=0 a3=0 items=0 ppid=3181 pid=3204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:19.235000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 00:07:19.236000 audit[3206]: NETFILTER_CFG table=filter:52 family=2 entries=2 op=nft_register_chain pid=3206 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:19.236000 audit[3206]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffdb8cd930 a2=0 a3=0 items=0 ppid=3181 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:19.236000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 00:07:19.244000 audit[3210]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=3210 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:19.244000 audit[3210]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffff446f920 a2=0 a3=0 items=0 ppid=3181 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:19.244000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Jan 14 00:07:19.245713 kubelet[3181]: I0114 00:07:19.245684 3181 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 14 00:07:19.245713 kubelet[3181]: I0114 00:07:19.245712 3181 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 14 00:07:19.245758 kubelet[3181]: I0114 00:07:19.245745 3181 kubelet.go:2427] "Starting kubelet main sync loop" Jan 14 00:07:19.245813 kubelet[3181]: E0114 00:07:19.245787 3181 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 00:07:19.245000 audit[3211]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3211 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:19.245000 audit[3211]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffed4b4520 a2=0 a3=0 items=0 ppid=3181 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:19.245000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 00:07:19.247400 kubelet[3181]: E0114 00:07:19.247315 3181 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 00:07:19.248000 audit[3214]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3214 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:19.248000 audit[3214]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe9759050 a2=0 a3=0 items=0 ppid=3181 pid=3214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:19.248000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 00:07:19.249000 audit[3215]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_chain pid=3215 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:19.249000 audit[3215]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc26bec90 a2=0 a3=0 items=0 ppid=3181 pid=3215 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:19.249000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 00:07:19.252949 kubelet[3181]: I0114 00:07:19.252719 3181 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 00:07:19.252949 kubelet[3181]: I0114 00:07:19.252733 3181 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 00:07:19.252949 kubelet[3181]: I0114 00:07:19.252744 3181 state_mem.go:36] "Initialized new in-memory state store" Jan 14 00:07:19.258560 kubelet[3181]: I0114 00:07:19.258539 3181 policy_none.go:49] "None policy: Start" Jan 14 00:07:19.258657 kubelet[3181]: I0114 00:07:19.258648 3181 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 14 00:07:19.258701 kubelet[3181]: I0114 00:07:19.258693 3181 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 14 00:07:19.263710 kubelet[3181]: I0114 00:07:19.263689 3181 policy_none.go:47] "Start" Jan 14 00:07:19.267805 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 14 00:07:19.279929 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 14 00:07:19.282873 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 14 00:07:19.294886 kubelet[3181]: E0114 00:07:19.293954 3181 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 00:07:19.294886 kubelet[3181]: I0114 00:07:19.294191 3181 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 00:07:19.294886 kubelet[3181]: I0114 00:07:19.294202 3181 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 00:07:19.294886 kubelet[3181]: I0114 00:07:19.294619 3181 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 00:07:19.297146 kubelet[3181]: E0114 00:07:19.297130 3181 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 14 00:07:19.297347 kubelet[3181]: E0114 00:07:19.297261 3181 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4547.0.0-n-16ff4e9fd7\" not found" Jan 14 00:07:19.358790 systemd[1]: Created slice kubepods-burstable-podc38a995f62e6deff0a33f968a81f46dc.slice - libcontainer container kubepods-burstable-podc38a995f62e6deff0a33f968a81f46dc.slice. Jan 14 00:07:19.368731 kubelet[3181]: E0114 00:07:19.368449 3181 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-n-16ff4e9fd7\" not found" node="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:19.372439 systemd[1]: Created slice kubepods-burstable-pod1f3e4183f7225ed705c47a879f6021c7.slice - libcontainer container kubepods-burstable-pod1f3e4183f7225ed705c47a879f6021c7.slice. 
Jan 14 00:07:19.382966 kubelet[3181]: E0114 00:07:19.382945 3181 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-n-16ff4e9fd7\" not found" node="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:19.384590 systemd[1]: Created slice kubepods-burstable-pod25a2b67991fe2bc42cd444081ade252e.slice - libcontainer container kubepods-burstable-pod25a2b67991fe2bc42cd444081ade252e.slice. Jan 14 00:07:19.385951 kubelet[3181]: E0114 00:07:19.385931 3181 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-n-16ff4e9fd7\" not found" node="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:19.396402 kubelet[3181]: I0114 00:07:19.396381 3181 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:19.396746 kubelet[3181]: E0114 00:07:19.396721 3181 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.18:6443/api/v1/nodes\": dial tcp 10.200.20.18:6443: connect: connection refused" node="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:19.429192 kubelet[3181]: I0114 00:07:19.429130 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c38a995f62e6deff0a33f968a81f46dc-k8s-certs\") pod \"kube-apiserver-ci-4547.0.0-n-16ff4e9fd7\" (UID: \"c38a995f62e6deff0a33f968a81f46dc\") " pod="kube-system/kube-apiserver-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:19.429361 kubelet[3181]: I0114 00:07:19.429268 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c38a995f62e6deff0a33f968a81f46dc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547.0.0-n-16ff4e9fd7\" (UID: \"c38a995f62e6deff0a33f968a81f46dc\") " pod="kube-system/kube-apiserver-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:19.429361 kubelet[3181]: I0114 00:07:19.429288 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1f3e4183f7225ed705c47a879f6021c7-ca-certs\") pod \"kube-controller-manager-ci-4547.0.0-n-16ff4e9fd7\" (UID: \"1f3e4183f7225ed705c47a879f6021c7\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:19.429361 kubelet[3181]: I0114 00:07:19.429298 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1f3e4183f7225ed705c47a879f6021c7-flexvolume-dir\") pod \"kube-controller-manager-ci-4547.0.0-n-16ff4e9fd7\" (UID: \"1f3e4183f7225ed705c47a879f6021c7\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:19.429738 kubelet[3181]: E0114 00:07:19.429700 3181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-n-16ff4e9fd7?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="400ms" Jan 14 00:07:19.429859 kubelet[3181]: I0114 00:07:19.429800 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1f3e4183f7225ed705c47a879f6021c7-k8s-certs\") pod \"kube-controller-manager-ci-4547.0.0-n-16ff4e9fd7\" (UID: \"1f3e4183f7225ed705c47a879f6021c7\") " 
pod="kube-system/kube-controller-manager-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:19.429958 kubelet[3181]: I0114 00:07:19.429946 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1f3e4183f7225ed705c47a879f6021c7-kubeconfig\") pod \"kube-controller-manager-ci-4547.0.0-n-16ff4e9fd7\" (UID: \"1f3e4183f7225ed705c47a879f6021c7\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:19.430069 kubelet[3181]: I0114 00:07:19.430057 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c38a995f62e6deff0a33f968a81f46dc-ca-certs\") pod \"kube-apiserver-ci-4547.0.0-n-16ff4e9fd7\" (UID: \"c38a995f62e6deff0a33f968a81f46dc\") " pod="kube-system/kube-apiserver-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:19.430160 kubelet[3181]: I0114 00:07:19.430148 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1f3e4183f7225ed705c47a879f6021c7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547.0.0-n-16ff4e9fd7\" (UID: \"1f3e4183f7225ed705c47a879f6021c7\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:19.430286 kubelet[3181]: I0114 00:07:19.430228 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/25a2b67991fe2bc42cd444081ade252e-kubeconfig\") pod \"kube-scheduler-ci-4547.0.0-n-16ff4e9fd7\" (UID: \"25a2b67991fe2bc42cd444081ade252e\") " pod="kube-system/kube-scheduler-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:19.599582 kubelet[3181]: I0114 00:07:19.599260 3181 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:19.599582 kubelet[3181]: E0114 00:07:19.599565 3181 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.18:6443/api/v1/nodes\": dial tcp 10.200.20.18:6443: connect: connection refused" node="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:19.675205 containerd[2043]: time="2026-01-14T00:07:19.675163346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547.0.0-n-16ff4e9fd7,Uid:c38a995f62e6deff0a33f968a81f46dc,Namespace:kube-system,Attempt:0,}" Jan 14 00:07:19.690103 containerd[2043]: time="2026-01-14T00:07:19.689933123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547.0.0-n-16ff4e9fd7,Uid:1f3e4183f7225ed705c47a879f6021c7,Namespace:kube-system,Attempt:0,}" Jan 14 00:07:19.694739 containerd[2043]: time="2026-01-14T00:07:19.694714854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547.0.0-n-16ff4e9fd7,Uid:25a2b67991fe2bc42cd444081ade252e,Namespace:kube-system,Attempt:0,}" Jan 14 00:07:19.830450 kubelet[3181]: E0114 00:07:19.830407 3181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-n-16ff4e9fd7?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="800ms" Jan 14 00:07:20.001927 kubelet[3181]: I0114 00:07:20.001653 3181 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:20.002178 kubelet[3181]: E0114 00:07:20.002156 3181 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.18:6443/api/v1/nodes\": dial tcp 10.200.20.18:6443: connect: connection refused" node="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:20.130089 kubelet[3181]: E0114 00:07:20.130044 3181 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 00:07:20.353869 kubelet[3181]: E0114 00:07:20.353744 3181 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 00:07:20.381250 kubelet[3181]: E0114 00:07:20.381214 3181 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547.0.0-n-16ff4e9fd7&limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 00:07:20.631656 kubelet[3181]: E0114 00:07:20.631549 3181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-n-16ff4e9fd7?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="1.6s" Jan 14 00:07:20.684234 kubelet[3181]: E0114 00:07:20.684192 3181 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 00:07:20.804443 kubelet[3181]: I0114 00:07:20.804396 3181 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:20.853950 kubelet[3181]: E0114 00:07:20.804748 3181 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.18:6443/api/v1/nodes\": dial tcp 10.200.20.18:6443: connect: connection refused" node="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:21.034794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount848727868.mount: Deactivated successfully. 
Jan 14 00:07:21.058036 containerd[2043]: time="2026-01-14T00:07:21.057778385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 00:07:21.067875 containerd[2043]: time="2026-01-14T00:07:21.067822450Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=886" Jan 14 00:07:21.075130 containerd[2043]: time="2026-01-14T00:07:21.074987554Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 00:07:21.082038 containerd[2043]: time="2026-01-14T00:07:21.081962474Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 00:07:21.084702 containerd[2043]: time="2026-01-14T00:07:21.084656331Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 00:07:21.090040 containerd[2043]: time="2026-01-14T00:07:21.088848095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 00:07:21.090040 containerd[2043]: time="2026-01-14T00:07:21.089639262Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.407997271s" Jan 14 00:07:21.092015 containerd[2043]: time="2026-01-14T00:07:21.091831796Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 00:07:21.101808 containerd[2043]: time="2026-01-14T00:07:21.101488853Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 00:07:21.101808 containerd[2043]: time="2026-01-14T00:07:21.101749943Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.400013814s" Jan 14 00:07:21.130695 containerd[2043]: time="2026-01-14T00:07:21.130654281Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.419272071s" Jan 14 00:07:21.136957 containerd[2043]: time="2026-01-14T00:07:21.136567736Z" level=info msg="connecting to shim 5998f2e289fc483b748b3ecc2acae97ebce71b4e53de137702c10be8952e9f96" address="unix:///run/containerd/s/53b08da3f879bd466189d3c35e9579e3c46403b464d939c0ea7ed130cf30667a" namespace=k8s.io protocol=ttrpc version=3 Jan 14 
00:07:21.152082 containerd[2043]: time="2026-01-14T00:07:21.152040284Z" level=info msg="connecting to shim 60f6a9dac40f04d6144c24cc35867e934776bb7e60041fea185154ab9825f060" address="unix:///run/containerd/s/7282b36a0796de70952107a872549b45b27e2548ea95468f288319a2cdbbce5e" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:07:21.166178 systemd[1]: Started cri-containerd-5998f2e289fc483b748b3ecc2acae97ebce71b4e53de137702c10be8952e9f96.scope - libcontainer container 5998f2e289fc483b748b3ecc2acae97ebce71b4e53de137702c10be8952e9f96. Jan 14 00:07:21.179582 containerd[2043]: time="2026-01-14T00:07:21.179534791Z" level=info msg="connecting to shim 974b4e951c38484e33567fc4749934bf488c7a56d2c85a70caec802b4cfb81f3" address="unix:///run/containerd/s/c4a44fef205a3dccb141127a1fee06737e77fa722c778c6983cf946aad14b36f" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:07:21.181164 systemd[1]: Started cri-containerd-60f6a9dac40f04d6144c24cc35867e934776bb7e60041fea185154ab9825f060.scope - libcontainer container 60f6a9dac40f04d6144c24cc35867e934776bb7e60041fea185154ab9825f060. Jan 14 00:07:21.182000 audit: BPF prog-id=107 op=LOAD Jan 14 00:07:21.183000 audit: BPF prog-id=108 op=LOAD Jan 14 00:07:21.183000 audit[3240]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3229 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.183000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539393866326532383966633438336237343862336563633261636165 Jan 14 00:07:21.184000 audit: BPF prog-id=108 op=UNLOAD Jan 14 00:07:21.184000 audit[3240]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3229 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539393866326532383966633438336237343862336563633261636165 Jan 14 00:07:21.184000 audit: BPF prog-id=109 op=LOAD Jan 14 00:07:21.184000 audit[3240]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3229 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539393866326532383966633438336237343862336563633261636165 Jan 14 00:07:21.184000 audit: BPF prog-id=110 op=LOAD Jan 14 00:07:21.184000 audit[3240]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3229 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
14 00:07:21.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539393866326532383966633438336237343862336563633261636165 Jan 14 00:07:21.185000 audit: BPF prog-id=110 op=UNLOAD Jan 14 00:07:21.185000 audit[3240]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3229 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.185000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539393866326532383966633438336237343862336563633261636165 Jan 14 00:07:21.185000 audit: BPF prog-id=109 op=UNLOAD Jan 14 00:07:21.185000 audit[3240]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3229 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.185000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539393866326532383966633438336237343862336563633261636165 Jan 14 00:07:21.185000 audit: BPF prog-id=111 op=LOAD Jan 14 00:07:21.185000 audit[3240]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3229 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.185000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539393866326532383966633438336237343862336563633261636165 Jan 14 00:07:21.190000 audit: BPF prog-id=112 op=LOAD Jan 14 00:07:21.192000 audit: BPF prog-id=113 op=LOAD Jan 14 00:07:21.192000 audit[3272]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=3255 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.192000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630663661396461633430663034643631343463323463633335383637 Jan 14 00:07:21.192000 audit: BPF prog-id=113 op=UNLOAD Jan 14 00:07:21.192000 audit[3272]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3255 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.192000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630663661396461633430663034643631343463323463633335383637 Jan 14 00:07:21.192000 audit: BPF prog-id=114 op=LOAD Jan 14 00:07:21.192000 audit[3272]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=3255 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.192000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630663661396461633430663034643631343463323463633335383637 Jan 14 00:07:21.192000 audit: BPF prog-id=115 op=LOAD Jan 14 00:07:21.192000 audit[3272]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=3255 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.192000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630663661396461633430663034643631343463323463633335383637 Jan 14 00:07:21.192000 audit: BPF prog-id=115 op=UNLOAD Jan 14 00:07:21.192000 audit[3272]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3255 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.192000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630663661396461633430663034643631343463323463633335383637 Jan 14 00:07:21.192000 audit: BPF prog-id=114 op=UNLOAD Jan 14 00:07:21.192000 audit[3272]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3255 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.192000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630663661396461633430663034643631343463323463633335383637 Jan 14 00:07:21.192000 audit: BPF prog-id=116 op=LOAD Jan 14 00:07:21.192000 audit[3272]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=3255 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.192000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630663661396461633430663034643631343463323463633335383637 Jan 14 00:07:21.214232 systemd[1]: Started cri-containerd-974b4e951c38484e33567fc4749934bf488c7a56d2c85a70caec802b4cfb81f3.scope - libcontainer container 974b4e951c38484e33567fc4749934bf488c7a56d2c85a70caec802b4cfb81f3. Jan 14 00:07:21.228474 containerd[2043]: time="2026-01-14T00:07:21.228421293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547.0.0-n-16ff4e9fd7,Uid:c38a995f62e6deff0a33f968a81f46dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"5998f2e289fc483b748b3ecc2acae97ebce71b4e53de137702c10be8952e9f96\"" Jan 14 00:07:21.233000 audit: BPF prog-id=117 op=LOAD Jan 14 00:07:21.234000 audit: BPF prog-id=118 op=LOAD Jan 14 00:07:21.234000 audit[3317]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=3297 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937346234653935316333383438346533333536376663343734393933 Jan 14 00:07:21.234000 audit: BPF prog-id=118 op=UNLOAD Jan 14 00:07:21.234000 audit[3317]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3297 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937346234653935316333383438346533333536376663343734393933 Jan 14 00:07:21.234000 audit: BPF prog-id=119 op=LOAD Jan 14 00:07:21.234000 audit[3317]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=3297 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937346234653935316333383438346533333536376663343734393933 Jan 14 00:07:21.234000 audit: BPF prog-id=120 op=LOAD Jan 14 00:07:21.234000 audit[3317]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=3297 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.234000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937346234653935316333383438346533333536376663343734393933 Jan 14 00:07:21.234000 audit: BPF prog-id=120 op=UNLOAD Jan 14 00:07:21.234000 audit[3317]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3297 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937346234653935316333383438346533333536376663343734393933 Jan 14 00:07:21.234000 audit: BPF prog-id=119 op=UNLOAD Jan 14 00:07:21.234000 audit[3317]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3297 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937346234653935316333383438346533333536376663343734393933 Jan 14 00:07:21.234000 audit: BPF prog-id=121 op=LOAD Jan 14 00:07:21.234000 audit[3317]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=3297 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937346234653935316333383438346533333536376663343734393933 Jan 14 00:07:21.236514 containerd[2043]: time="2026-01-14T00:07:21.236145595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547.0.0-n-16ff4e9fd7,Uid:1f3e4183f7225ed705c47a879f6021c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"60f6a9dac40f04d6144c24cc35867e934776bb7e60041fea185154ab9825f060\"" Jan 14 00:07:21.239653 containerd[2043]: time="2026-01-14T00:07:21.239622251Z" level=info msg="CreateContainer within sandbox \"5998f2e289fc483b748b3ecc2acae97ebce71b4e53de137702c10be8952e9f96\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 00:07:21.244663 containerd[2043]: time="2026-01-14T00:07:21.244629310Z" level=info msg="CreateContainer within sandbox \"60f6a9dac40f04d6144c24cc35867e934776bb7e60041fea185154ab9825f060\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 00:07:21.264460 containerd[2043]: time="2026-01-14T00:07:21.264350961Z" level=info msg="Container 1de98883f70794843b970bc4fd4f6ae27eb48809296501862d1f1a5c90888b1c: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:07:21.267330 containerd[2043]: time="2026-01-14T00:07:21.267293836Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4547.0.0-n-16ff4e9fd7,Uid:25a2b67991fe2bc42cd444081ade252e,Namespace:kube-system,Attempt:0,} returns sandbox id \"974b4e951c38484e33567fc4749934bf488c7a56d2c85a70caec802b4cfb81f3\"" Jan 14 00:07:21.276884 containerd[2043]: time="2026-01-14T00:07:21.276821672Z" level=info msg="CreateContainer within sandbox \"974b4e951c38484e33567fc4749934bf488c7a56d2c85a70caec802b4cfb81f3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 00:07:21.276884 containerd[2043]: time="2026-01-14T00:07:21.276874138Z" level=info msg="Container 4474b5b6e798815d1fa1c46acaf9dbe594975f97fefd4075de414f4dc0ae15e8: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:07:21.298716 containerd[2043]: time="2026-01-14T00:07:21.298593643Z" level=info msg="CreateContainer within sandbox \"5998f2e289fc483b748b3ecc2acae97ebce71b4e53de137702c10be8952e9f96\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1de98883f70794843b970bc4fd4f6ae27eb48809296501862d1f1a5c90888b1c\"" Jan 14 00:07:21.300299 containerd[2043]: time="2026-01-14T00:07:21.300227747Z" level=info msg="StartContainer for \"1de98883f70794843b970bc4fd4f6ae27eb48809296501862d1f1a5c90888b1c\"" Jan 14 00:07:21.301397 containerd[2043]: time="2026-01-14T00:07:21.301339230Z" level=info msg="connecting to shim 1de98883f70794843b970bc4fd4f6ae27eb48809296501862d1f1a5c90888b1c" address="unix:///run/containerd/s/53b08da3f879bd466189d3c35e9579e3c46403b464d939c0ea7ed130cf30667a" protocol=ttrpc version=3 Jan 14 00:07:21.311517 containerd[2043]: time="2026-01-14T00:07:21.311471834Z" level=info msg="CreateContainer within sandbox \"60f6a9dac40f04d6144c24cc35867e934776bb7e60041fea185154ab9825f060\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4474b5b6e798815d1fa1c46acaf9dbe594975f97fefd4075de414f4dc0ae15e8\"" Jan 14 00:07:21.313071 containerd[2043]: time="2026-01-14T00:07:21.313043407Z" level=info msg="StartContainer for \"4474b5b6e798815d1fa1c46acaf9dbe594975f97fefd4075de414f4dc0ae15e8\"" Jan 14 00:07:21.315903 containerd[2043]: time="2026-01-14T00:07:21.315874118Z" level=info msg="connecting to shim 4474b5b6e798815d1fa1c46acaf9dbe594975f97fefd4075de414f4dc0ae15e8" address="unix:///run/containerd/s/7282b36a0796de70952107a872549b45b27e2548ea95468f288319a2cdbbce5e" protocol=ttrpc version=3 Jan 14 00:07:21.317230 systemd[1]: Started cri-containerd-1de98883f70794843b970bc4fd4f6ae27eb48809296501862d1f1a5c90888b1c.scope - libcontainer container 1de98883f70794843b970bc4fd4f6ae27eb48809296501862d1f1a5c90888b1c. 
Jan 14 00:07:21.328000 audit: BPF prog-id=122 op=LOAD Jan 14 00:07:21.329000 audit: BPF prog-id=123 op=LOAD Jan 14 00:07:21.329000 audit[3359]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3229 pid=3359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164653938383833663730373934383433623937306263346664346636 Jan 14 00:07:21.330000 audit: BPF prog-id=123 op=UNLOAD Jan 14 00:07:21.330000 audit[3359]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3229 pid=3359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164653938383833663730373934383433623937306263346664346636 Jan 14 00:07:21.330000 audit: BPF prog-id=124 op=LOAD Jan 14 00:07:21.330000 audit[3359]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3229 pid=3359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164653938383833663730373934383433623937306263346664346636 Jan 14 00:07:21.331000 audit: BPF prog-id=125 op=LOAD Jan 14 00:07:21.331000 audit[3359]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3229 pid=3359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164653938383833663730373934383433623937306263346664346636 Jan 14 00:07:21.331000 audit: BPF prog-id=125 op=UNLOAD Jan 14 00:07:21.331000 audit[3359]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3229 pid=3359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164653938383833663730373934383433623937306263346664346636 Jan 14 00:07:21.331000 audit: BPF prog-id=124 op=UNLOAD Jan 14 00:07:21.331000 audit[3359]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3229 pid=3359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164653938383833663730373934383433623937306263346664346636 Jan 14 00:07:21.331000 audit: BPF prog-id=126 op=LOAD Jan 14 00:07:21.331000 audit[3359]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3229 pid=3359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164653938383833663730373934383433623937306263346664346636 Jan 14 00:07:21.344320 systemd[1]: Started cri-containerd-4474b5b6e798815d1fa1c46acaf9dbe594975f97fefd4075de414f4dc0ae15e8.scope - libcontainer container 4474b5b6e798815d1fa1c46acaf9dbe594975f97fefd4075de414f4dc0ae15e8. Jan 14 00:07:21.347404 containerd[2043]: time="2026-01-14T00:07:21.347364356Z" level=info msg="Container 6e2485c03c7d10b419f4b964eda598d163a9c9be24b82753e2a3862e70f046f4: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:07:21.362153 kubelet[3181]: E0114 00:07:21.361044 3181 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 14 00:07:21.361000 audit: BPF prog-id=127 op=LOAD Jan 14 00:07:21.362000 audit: BPF prog-id=128 op=LOAD Jan 14 00:07:21.362000 audit[3372]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3255 pid=3372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.362000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434373462356236653739383831356431666131633436616361663964 Jan 14 00:07:21.362000 audit: BPF prog-id=128 op=UNLOAD Jan 14 00:07:21.362000 audit[3372]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3255 pid=3372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.362000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434373462356236653739383831356431666131633436616361663964 Jan 14 
00:07:21.363000 audit: BPF prog-id=129 op=LOAD Jan 14 00:07:21.363000 audit[3372]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3255 pid=3372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.363000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434373462356236653739383831356431666131633436616361663964 Jan 14 00:07:21.363000 audit: BPF prog-id=130 op=LOAD Jan 14 00:07:21.363000 audit[3372]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3255 pid=3372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.363000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434373462356236653739383831356431666131633436616361663964 Jan 14 00:07:21.363000 audit: BPF prog-id=130 op=UNLOAD Jan 14 00:07:21.363000 audit[3372]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3255 pid=3372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.363000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434373462356236653739383831356431666131633436616361663964 Jan 14 00:07:21.363000 audit: BPF prog-id=129 op=UNLOAD Jan 14 00:07:21.363000 audit[3372]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3255 pid=3372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.363000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434373462356236653739383831356431666131633436616361663964 Jan 14 00:07:21.363000 audit: BPF prog-id=131 op=LOAD Jan 14 00:07:21.363000 audit[3372]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3255 pid=3372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.363000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434373462356236653739383831356431666131633436616361663964 Jan 14 00:07:21.370677 containerd[2043]: time="2026-01-14T00:07:21.370599608Z" level=info msg="CreateContainer within sandbox 
\"974b4e951c38484e33567fc4749934bf488c7a56d2c85a70caec802b4cfb81f3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6e2485c03c7d10b419f4b964eda598d163a9c9be24b82753e2a3862e70f046f4\"" Jan 14 00:07:21.371160 containerd[2043]: time="2026-01-14T00:07:21.371138749Z" level=info msg="StartContainer for \"6e2485c03c7d10b419f4b964eda598d163a9c9be24b82753e2a3862e70f046f4\"" Jan 14 00:07:21.371908 containerd[2043]: time="2026-01-14T00:07:21.371881778Z" level=info msg="connecting to shim 6e2485c03c7d10b419f4b964eda598d163a9c9be24b82753e2a3862e70f046f4" address="unix:///run/containerd/s/c4a44fef205a3dccb141127a1fee06737e77fa722c778c6983cf946aad14b36f" protocol=ttrpc version=3 Jan 14 00:07:21.381731 containerd[2043]: time="2026-01-14T00:07:21.381581653Z" level=info msg="StartContainer for \"1de98883f70794843b970bc4fd4f6ae27eb48809296501862d1f1a5c90888b1c\" returns successfully" Jan 14 00:07:21.401270 systemd[1]: Started cri-containerd-6e2485c03c7d10b419f4b964eda598d163a9c9be24b82753e2a3862e70f046f4.scope - libcontainer container 6e2485c03c7d10b419f4b964eda598d163a9c9be24b82753e2a3862e70f046f4. Jan 14 00:07:21.416165 containerd[2043]: time="2026-01-14T00:07:21.416127835Z" level=info msg="StartContainer for \"4474b5b6e798815d1fa1c46acaf9dbe594975f97fefd4075de414f4dc0ae15e8\" returns successfully" Jan 14 00:07:21.422000 audit: BPF prog-id=132 op=LOAD Jan 14 00:07:21.423000 audit: BPF prog-id=133 op=LOAD Jan 14 00:07:21.423000 audit[3403]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3297 pid=3403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665323438356330336337643130623431396634623936346564613539 Jan 14 00:07:21.423000 audit: BPF prog-id=133 op=UNLOAD Jan 14 00:07:21.423000 audit[3403]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3297 pid=3403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665323438356330336337643130623431396634623936346564613539 Jan 14 00:07:21.423000 audit: BPF prog-id=134 op=LOAD Jan 14 00:07:21.423000 audit[3403]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3297 pid=3403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665323438356330336337643130623431396634623936346564613539 Jan 14 00:07:21.423000 audit: BPF prog-id=135 op=LOAD Jan 14 00:07:21.423000 audit[3403]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3297 pid=3403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665323438356330336337643130623431396634623936346564613539 Jan 14 00:07:21.423000 audit: BPF prog-id=135 op=UNLOAD Jan 14 00:07:21.423000 audit[3403]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3297 pid=3403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665323438356330336337643130623431396634623936346564613539 Jan 14 00:07:21.423000 audit: BPF prog-id=134 op=UNLOAD Jan 14 00:07:21.423000 audit[3403]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3297 pid=3403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665323438356330336337643130623431396634623936346564613539 Jan 14 00:07:21.423000 audit: BPF prog-id=136 op=LOAD Jan 14 00:07:21.423000 audit[3403]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3297 pid=3403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:21.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665323438356330336337643130623431396634623936346564613539 Jan 14 00:07:21.470608 containerd[2043]: time="2026-01-14T00:07:21.470486695Z" level=info msg="StartContainer for \"6e2485c03c7d10b419f4b964eda598d163a9c9be24b82753e2a3862e70f046f4\" returns successfully" Jan 14 00:07:22.265054 kubelet[3181]: E0114 00:07:22.264800 3181 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-n-16ff4e9fd7\" not found" node="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:22.268480 kubelet[3181]: E0114 00:07:22.268454 3181 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-n-16ff4e9fd7\" not found" node="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:22.270799 kubelet[3181]: E0114 00:07:22.270775 3181 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-n-16ff4e9fd7\" not found" node="ci-4547.0.0-n-16ff4e9fd7" Jan 14 
00:07:22.407873 kubelet[3181]: I0114 00:07:22.407552 3181 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:22.786017 kubelet[3181]: E0114 00:07:22.785357 3181 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4547.0.0-n-16ff4e9fd7\" not found" node="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:22.904523 kubelet[3181]: I0114 00:07:22.904478 3181 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:22.904523 kubelet[3181]: E0114 00:07:22.904522 3181 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4547.0.0-n-16ff4e9fd7\": node \"ci-4547.0.0-n-16ff4e9fd7\" not found" Jan 14 00:07:22.929569 kubelet[3181]: E0114 00:07:22.929532 3181 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4547.0.0-n-16ff4e9fd7\" not found" Jan 14 00:07:23.030110 kubelet[3181]: E0114 00:07:23.030068 3181 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4547.0.0-n-16ff4e9fd7\" not found" Jan 14 00:07:23.130748 kubelet[3181]: E0114 00:07:23.130616 3181 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4547.0.0-n-16ff4e9fd7\" not found" Jan 14 00:07:23.228353 kubelet[3181]: I0114 00:07:23.228297 3181 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:23.234925 kubelet[3181]: E0114 00:07:23.234870 3181 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547.0.0-n-16ff4e9fd7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:23.234925 kubelet[3181]: I0114 00:07:23.234898 3181 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:23.236719 kubelet[3181]: E0114 00:07:23.236620 3181 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.0.0-n-16ff4e9fd7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:23.236719 kubelet[3181]: I0114 00:07:23.236653 3181 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:23.238053 kubelet[3181]: E0114 00:07:23.238010 3181 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.0.0-n-16ff4e9fd7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:23.273034 kubelet[3181]: I0114 00:07:23.271722 3181 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:23.274054 kubelet[3181]: I0114 00:07:23.273627 3181 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:23.274054 kubelet[3181]: I0114 00:07:23.273909 3181 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:23.276730 kubelet[3181]: E0114 00:07:23.276369 3181 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547.0.0-n-16ff4e9fd7\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:23.276730 kubelet[3181]: E0114 00:07:23.276549 3181 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.0.0-n-16ff4e9fd7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:23.277619 kubelet[3181]: E0114 00:07:23.277579 3181 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.0.0-n-16ff4e9fd7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:24.216781 kubelet[3181]: I0114 00:07:24.216733 3181 apiserver.go:52] "Watching apiserver" Jan 14 00:07:24.227819 kubelet[3181]: I0114 00:07:24.227778 3181 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 14 00:07:24.273798 kubelet[3181]: I0114 00:07:24.273754 3181 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:24.294125 kubelet[3181]: I0114 00:07:24.294087 3181 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 14 00:07:25.124441 systemd[1]: Reload requested from client PID 3462 ('systemctl') (unit session-10.scope)... Jan 14 00:07:25.124454 systemd[1]: Reloading... Jan 14 00:07:25.211073 zram_generator::config[3512]: No configuration found. Jan 14 00:07:25.376816 systemd[1]: Reloading finished in 252 ms. Jan 14 00:07:25.400159 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:07:25.414785 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 00:07:25.415342 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:07:25.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:07:25.418633 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 14 00:07:25.418676 kernel: audit: type=1131 audit(1768349245.414:415): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:07:25.420044 systemd[1]: kubelet.service: Consumed 677ms CPU time, 119.5M memory peak. Jan 14 00:07:25.426274 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 14 00:07:25.432000 audit: BPF prog-id=137 op=LOAD Jan 14 00:07:25.438013 kernel: audit: type=1334 audit(1768349245.432:416): prog-id=137 op=LOAD Jan 14 00:07:25.438073 kernel: audit: type=1334 audit(1768349245.432:417): prog-id=92 op=UNLOAD Jan 14 00:07:25.432000 audit: BPF prog-id=92 op=UNLOAD Jan 14 00:07:25.436000 audit: BPF prog-id=138 op=LOAD Jan 14 00:07:25.446163 kernel: audit: type=1334 audit(1768349245.436:418): prog-id=138 op=LOAD Jan 14 00:07:25.446000 audit: BPF prog-id=139 op=LOAD Jan 14 00:07:25.446000 audit: BPF prog-id=93 op=UNLOAD Jan 14 00:07:25.457160 kernel: audit: type=1334 audit(1768349245.446:419): prog-id=139 op=LOAD Jan 14 00:07:25.457240 kernel: audit: type=1334 audit(1768349245.446:420): prog-id=93 op=UNLOAD Jan 14 00:07:25.446000 audit: BPF prog-id=94 op=UNLOAD Jan 14 00:07:25.461608 kernel: audit: type=1334 audit(1768349245.446:421): prog-id=94 op=UNLOAD Jan 14 00:07:25.446000 audit: BPF prog-id=140 op=LOAD Jan 14 00:07:25.465643 kernel: audit: type=1334 audit(1768349245.446:422): prog-id=140 op=LOAD Jan 14 00:07:25.446000 audit: BPF prog-id=90 op=UNLOAD Jan 14 00:07:25.470202 kernel: audit: type=1334 audit(1768349245.446:423): prog-id=90 op=UNLOAD Jan 14 00:07:25.451000 audit: BPF prog-id=141 op=LOAD Jan 14 00:07:25.475031 kernel: audit: type=1334 audit(1768349245.451:424): prog-id=141 op=LOAD Jan 14 00:07:25.451000 audit: BPF prog-id=104 op=UNLOAD Jan 14 00:07:25.451000 audit: BPF prog-id=142 op=LOAD Jan 14 00:07:25.451000 audit: BPF prog-id=143 op=LOAD Jan 14 00:07:25.451000 audit: BPF prog-id=105 op=UNLOAD Jan 14 00:07:25.451000 audit: BPF prog-id=106 op=UNLOAD Jan 14 00:07:25.456000 audit: BPF prog-id=144 op=LOAD Jan 14 00:07:25.456000 audit: BPF prog-id=145 op=LOAD Jan 14 00:07:25.456000 audit: BPF prog-id=95 op=UNLOAD Jan 14 00:07:25.456000 audit: BPF prog-id=96 op=UNLOAD Jan 14 00:07:25.460000 audit: BPF prog-id=146 op=LOAD Jan 14 00:07:25.460000 audit: BPF prog-id=91 op=UNLOAD Jan 14 00:07:25.464000 audit: BPF prog-id=147 op=LOAD Jan 14 00:07:25.464000 audit: BPF prog-id=101 op=UNLOAD Jan 14 00:07:25.464000 audit: BPF prog-id=148 op=LOAD Jan 14 00:07:25.469000 audit: BPF prog-id=149 op=LOAD Jan 14 00:07:25.469000 audit: BPF prog-id=102 op=UNLOAD Jan 14 00:07:25.469000 audit: BPF prog-id=103 op=UNLOAD Jan 14 00:07:25.473000 audit: BPF prog-id=150 op=LOAD Jan 14 00:07:25.474000 audit: BPF prog-id=97 op=UNLOAD Jan 14 00:07:25.474000 audit: BPF prog-id=151 op=LOAD Jan 14 00:07:25.474000 audit: BPF prog-id=87 op=UNLOAD Jan 14 00:07:25.474000 audit: BPF prog-id=152 op=LOAD Jan 14 00:07:25.474000 audit: BPF prog-id=153 op=LOAD Jan 14 00:07:25.474000 audit: BPF prog-id=88 op=UNLOAD Jan 14 00:07:25.474000 audit: BPF prog-id=89 op=UNLOAD Jan 14 00:07:25.475000 audit: BPF prog-id=154 op=LOAD Jan 14 00:07:25.475000 audit: BPF prog-id=98 op=UNLOAD Jan 14 00:07:25.475000 audit: BPF prog-id=155 op=LOAD Jan 14 00:07:25.475000 audit: BPF prog-id=156 op=LOAD Jan 14 00:07:25.475000 audit: BPF prog-id=99 op=UNLOAD Jan 14 00:07:25.475000 audit: BPF prog-id=100 op=UNLOAD Jan 14 00:07:25.612287 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:07:25.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:07:25.618401 (kubelet)[3576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 00:07:25.652535 kubelet[3576]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 00:07:25.652940 kubelet[3576]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 00:07:25.653160 kubelet[3576]: I0114 00:07:25.653122 3576 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 00:07:25.659904 kubelet[3576]: I0114 00:07:25.659870 3576 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 14 00:07:25.659904 kubelet[3576]: I0114 00:07:25.659896 3576 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 00:07:25.660031 kubelet[3576]: I0114 00:07:25.659921 3576 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 14 00:07:25.660031 kubelet[3576]: I0114 00:07:25.659925 3576 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 00:07:25.660111 kubelet[3576]: I0114 00:07:25.660092 3576 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 00:07:25.661774 kubelet[3576]: I0114 00:07:25.661753 3576 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 14 00:07:25.665604 kubelet[3576]: I0114 00:07:25.665144 3576 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 00:07:25.669868 kubelet[3576]: I0114 00:07:25.669816 3576 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 00:07:25.672637 kubelet[3576]: I0114 00:07:25.672621 3576 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 14 00:07:25.672933 kubelet[3576]: I0114 00:07:25.672911 3576 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 00:07:25.673166 kubelet[3576]: I0114 00:07:25.673028 3576 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547.0.0-n-16ff4e9fd7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 00:07:25.673289 kubelet[3576]: I0114 00:07:25.673276 3576 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 00:07:25.673361 kubelet[3576]: I0114 00:07:25.673354 3576 container_manager_linux.go:306] "Creating device plugin manager" Jan 14 00:07:25.673436 kubelet[3576]: I0114 00:07:25.673430 3576 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 14 00:07:25.674092 kubelet[3576]: I0114 00:07:25.674065 3576 state_mem.go:36] "Initialized new in-memory state store" Jan 14 00:07:25.675316 kubelet[3576]: I0114 00:07:25.675289 3576 kubelet.go:475] "Attempting to sync node with API server" Jan 14 00:07:25.675622 kubelet[3576]: I0114 00:07:25.675450 3576 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 00:07:25.675622 kubelet[3576]: I0114 00:07:25.675486 3576 kubelet.go:387] "Adding apiserver pod source" Jan 14 00:07:25.676089 kubelet[3576]: I0114 00:07:25.675499 3576 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 00:07:25.683337 kubelet[3576]: I0114 00:07:25.683317 3576 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 00:07:25.683777 kubelet[3576]: I0114 00:07:25.683762 3576 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 00:07:25.684910 kubelet[3576]: I0114 00:07:25.683856 3576 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 14 
00:07:25.687102 kubelet[3576]: I0114 00:07:25.687072 3576 server.go:1262] "Started kubelet" Jan 14 00:07:25.689348 kubelet[3576]: I0114 00:07:25.688750 3576 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 00:07:25.690449 kubelet[3576]: I0114 00:07:25.690404 3576 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 00:07:25.691834 kubelet[3576]: I0114 00:07:25.691810 3576 server.go:310] "Adding debug handlers to kubelet server" Jan 14 00:07:25.694040 kubelet[3576]: I0114 00:07:25.694020 3576 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 14 00:07:25.694947 kubelet[3576]: I0114 00:07:25.694919 3576 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 14 00:07:25.695328 kubelet[3576]: I0114 00:07:25.695196 3576 reconciler.go:29] "Reconciler: start to sync state" Jan 14 00:07:25.697676 kubelet[3576]: I0114 00:07:25.697614 3576 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 00:07:25.697728 kubelet[3576]: I0114 00:07:25.697697 3576 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 14 00:07:25.697920 kubelet[3576]: I0114 00:07:25.697898 3576 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 00:07:25.699897 kubelet[3576]: I0114 00:07:25.699865 3576 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 00:07:25.702427 kubelet[3576]: I0114 00:07:25.702401 3576 factory.go:223] Registration of the systemd container factory successfully Jan 14 00:07:25.702602 kubelet[3576]: I0114 00:07:25.702584 3576 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 00:07:25.707491 kubelet[3576]: I0114 00:07:25.707473 3576 factory.go:223] Registration of the containerd container factory successfully Jan 14 00:07:25.727041 kubelet[3576]: E0114 00:07:25.726444 3576 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 00:07:25.729459 kubelet[3576]: I0114 00:07:25.729430 3576 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 14 00:07:25.732130 kubelet[3576]: I0114 00:07:25.732107 3576 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 14 00:07:25.732130 kubelet[3576]: I0114 00:07:25.732128 3576 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 14 00:07:25.732209 kubelet[3576]: I0114 00:07:25.732148 3576 kubelet.go:2427] "Starting kubelet main sync loop" Jan 14 00:07:25.732209 kubelet[3576]: E0114 00:07:25.732196 3576 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 00:07:25.766157 kubelet[3576]: I0114 00:07:25.766126 3576 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 00:07:25.766342 kubelet[3576]: I0114 00:07:25.766328 3576 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 00:07:25.766415 kubelet[3576]: I0114 00:07:25.766407 3576 state_mem.go:36] "Initialized new in-memory state store" Jan 14 00:07:25.767006 kubelet[3576]: I0114 00:07:25.766934 3576 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 00:07:25.767006 kubelet[3576]: I0114 00:07:25.766953 3576 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 00:07:25.767006 kubelet[3576]: I0114 00:07:25.766973 3576 policy_none.go:49] "None policy: Start" Jan 14 00:07:25.767006 kubelet[3576]: I0114 00:07:25.766982 3576 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 14 00:07:25.767140 kubelet[3576]: I0114 00:07:25.767129 3576 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 14 00:07:25.767358 kubelet[3576]: I0114 00:07:25.767315 3576 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 14 00:07:25.767358 kubelet[3576]: I0114 00:07:25.767326 3576 policy_none.go:47] "Start" Jan 14 00:07:25.778975 kubelet[3576]: E0114 00:07:25.778861 3576 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 00:07:25.779761 kubelet[3576]: I0114 00:07:25.779743 3576 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 00:07:25.779821 kubelet[3576]: I0114 00:07:25.779759 3576 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 00:07:25.780101 kubelet[3576]: I0114 00:07:25.780033 3576 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 00:07:25.781547 kubelet[3576]: E0114 00:07:25.781475 3576 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 14 00:07:25.834019 kubelet[3576]: I0114 00:07:25.833782 3576 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:25.835063 kubelet[3576]: I0114 00:07:25.834568 3576 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:25.835284 kubelet[3576]: I0114 00:07:25.834651 3576 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:25.845406 kubelet[3576]: I0114 00:07:25.844145 3576 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 14 00:07:25.845850 kubelet[3576]: I0114 00:07:25.845828 3576 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 14 00:07:25.846028 kubelet[3576]: I0114 00:07:25.846014 3576 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 14 00:07:25.846153 kubelet[3576]: E0114 00:07:25.846138 3576 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.0.0-n-16ff4e9fd7\" already exists" pod="kube-system/kube-apiserver-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:25.889675 kubelet[3576]: I0114 00:07:25.889646 3576 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:25.914126 kubelet[3576]: I0114 00:07:25.913762 3576 kubelet_node_status.go:124] "Node was previously registered" node="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:25.914126 kubelet[3576]: I0114 00:07:25.913846 3576 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:25.997018 kubelet[3576]: I0114 00:07:25.996924 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1f3e4183f7225ed705c47a879f6021c7-ca-certs\") pod \"kube-controller-manager-ci-4547.0.0-n-16ff4e9fd7\" (UID: \"1f3e4183f7225ed705c47a879f6021c7\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:25.997018 kubelet[3576]: I0114 00:07:25.996960 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1f3e4183f7225ed705c47a879f6021c7-flexvolume-dir\") pod \"kube-controller-manager-ci-4547.0.0-n-16ff4e9fd7\" (UID: \"1f3e4183f7225ed705c47a879f6021c7\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:25.997018 kubelet[3576]: I0114 00:07:25.996972 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1f3e4183f7225ed705c47a879f6021c7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547.0.0-n-16ff4e9fd7\" (UID: \"1f3e4183f7225ed705c47a879f6021c7\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:25.997018 kubelet[3576]: I0114 00:07:25.996985 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/25a2b67991fe2bc42cd444081ade252e-kubeconfig\") pod \"kube-scheduler-ci-4547.0.0-n-16ff4e9fd7\" (UID: \"25a2b67991fe2bc42cd444081ade252e\") " pod="kube-system/kube-scheduler-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:25.997670 kubelet[3576]: I0114 00:07:25.997275 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c38a995f62e6deff0a33f968a81f46dc-ca-certs\") pod \"kube-apiserver-ci-4547.0.0-n-16ff4e9fd7\" (UID: \"c38a995f62e6deff0a33f968a81f46dc\") " pod="kube-system/kube-apiserver-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:25.997670 kubelet[3576]: I0114 00:07:25.997294 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1f3e4183f7225ed705c47a879f6021c7-k8s-certs\") pod \"kube-controller-manager-ci-4547.0.0-n-16ff4e9fd7\" (UID: \"1f3e4183f7225ed705c47a879f6021c7\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:25.997670 kubelet[3576]: I0114 00:07:25.997609 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1f3e4183f7225ed705c47a879f6021c7-kubeconfig\") pod \"kube-controller-manager-ci-4547.0.0-n-16ff4e9fd7\" (UID: \"1f3e4183f7225ed705c47a879f6021c7\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:25.997670 kubelet[3576]: I0114 00:07:25.997620 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c38a995f62e6deff0a33f968a81f46dc-k8s-certs\") pod \"kube-apiserver-ci-4547.0.0-n-16ff4e9fd7\" (UID: \"c38a995f62e6deff0a33f968a81f46dc\") " pod="kube-system/kube-apiserver-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:25.997670 kubelet[3576]: I0114 00:07:25.997632 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c38a995f62e6deff0a33f968a81f46dc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547.0.0-n-16ff4e9fd7\" (UID: \"c38a995f62e6deff0a33f968a81f46dc\") " pod="kube-system/kube-apiserver-ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:07:26.677030 kubelet[3576]: I0114 00:07:26.676928 3576 apiserver.go:52] "Watching apiserver" Jan 14 00:07:26.695957 kubelet[3576]: I0114 00:07:26.695912 3576 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 14 00:07:26.772846 kubelet[3576]: I0114 00:07:26.772764 3576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4547.0.0-n-16ff4e9fd7" podStartSLOduration=1.772747555 podStartE2EDuration="1.772747555s" podCreationTimestamp="2026-01-14 00:07:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:07:26.771816409 +0000 UTC m=+1.150218592" watchObservedRunningTime="2026-01-14 00:07:26.772747555 +0000 UTC m=+1.151149754" Jan 14 00:07:26.794413 kubelet[3576]: I0114 00:07:26.794211 3576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4547.0.0-n-16ff4e9fd7" podStartSLOduration=1.794196351 podStartE2EDuration="1.794196351s" podCreationTimestamp="2026-01-14 00:07:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:07:26.782510298 +0000 UTC m=+1.160912497" watchObservedRunningTime="2026-01-14 00:07:26.794196351 +0000 UTC m=+1.172598534" Jan 14 00:07:26.813652 kubelet[3576]: I0114 00:07:26.813572 3576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4547.0.0-n-16ff4e9fd7" podStartSLOduration=2.813556575 podStartE2EDuration="2.813556575s" podCreationTimestamp="2026-01-14 00:07:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:07:26.79455006 +0000 UTC m=+1.172952243" watchObservedRunningTime="2026-01-14 00:07:26.813556575 +0000 UTC m=+1.191958766" Jan 14 00:07:30.759338 kubelet[3576]: I0114 00:07:30.759190 3576 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 00:07:30.760084 containerd[2043]: time="2026-01-14T00:07:30.759876118Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 14 00:07:30.760306 kubelet[3576]: I0114 00:07:30.760023 3576 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 00:07:31.442292 systemd[1]: Created slice kubepods-besteffort-pod1e812412_68f9_48a4_81fc_dbabd0723eb5.slice - libcontainer container kubepods-besteffort-pod1e812412_68f9_48a4_81fc_dbabd0723eb5.slice. Jan 14 00:07:31.529074 kubelet[3576]: I0114 00:07:31.529032 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e812412-68f9-48a4-81fc-dbabd0723eb5-xtables-lock\") pod \"kube-proxy-kpbk7\" (UID: \"1e812412-68f9-48a4-81fc-dbabd0723eb5\") " pod="kube-system/kube-proxy-kpbk7" Jan 14 00:07:31.529074 kubelet[3576]: I0114 00:07:31.529070 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1e812412-68f9-48a4-81fc-dbabd0723eb5-kube-proxy\") pod \"kube-proxy-kpbk7\" (UID: \"1e812412-68f9-48a4-81fc-dbabd0723eb5\") " pod="kube-system/kube-proxy-kpbk7" Jan 14 00:07:31.529074 kubelet[3576]: I0114 00:07:31.529088 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e812412-68f9-48a4-81fc-dbabd0723eb5-lib-modules\") pod \"kube-proxy-kpbk7\" (UID: \"1e812412-68f9-48a4-81fc-dbabd0723eb5\") " pod="kube-system/kube-proxy-kpbk7" Jan 14 00:07:31.529271 kubelet[3576]: I0114 00:07:31.529099 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxj6r\" (UniqueName: \"kubernetes.io/projected/1e812412-68f9-48a4-81fc-dbabd0723eb5-kube-api-access-cxj6r\") pod \"kube-proxy-kpbk7\" (UID: \"1e812412-68f9-48a4-81fc-dbabd0723eb5\") " pod="kube-system/kube-proxy-kpbk7" Jan 14 00:07:31.634260 kubelet[3576]: E0114 00:07:31.634215 3576 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 14 00:07:31.634260 kubelet[3576]: E0114 00:07:31.634251 3576 projected.go:196] Error preparing data for projected volume kube-api-access-cxj6r for pod kube-system/kube-proxy-kpbk7: configmap "kube-root-ca.crt" not found Jan 14 00:07:31.634421 kubelet[3576]: E0114 00:07:31.634339 3576 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/1e812412-68f9-48a4-81fc-dbabd0723eb5-kube-api-access-cxj6r podName:1e812412-68f9-48a4-81fc-dbabd0723eb5 nodeName:}" failed. No retries permitted until 2026-01-14 00:07:32.134308413 +0000 UTC m=+6.512710596 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cxj6r" (UniqueName: "kubernetes.io/projected/1e812412-68f9-48a4-81fc-dbabd0723eb5-kube-api-access-cxj6r") pod "kube-proxy-kpbk7" (UID: "1e812412-68f9-48a4-81fc-dbabd0723eb5") : configmap "kube-root-ca.crt" not found Jan 14 00:07:31.961348 systemd[1]: Created slice kubepods-besteffort-pod71f35b87_b8b2_4ac9_9851_01c5314c4bad.slice - libcontainer container kubepods-besteffort-pod71f35b87_b8b2_4ac9_9851_01c5314c4bad.slice. Jan 14 00:07:32.031288 kubelet[3576]: I0114 00:07:32.031236 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/71f35b87-b8b2-4ac9-9851-01c5314c4bad-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-r8wjk\" (UID: \"71f35b87-b8b2-4ac9-9851-01c5314c4bad\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-r8wjk" Jan 14 00:07:32.031288 kubelet[3576]: I0114 00:07:32.031292 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkpvv\" (UniqueName: \"kubernetes.io/projected/71f35b87-b8b2-4ac9-9851-01c5314c4bad-kube-api-access-bkpvv\") pod \"tigera-operator-65cdcdfd6d-r8wjk\" (UID: \"71f35b87-b8b2-4ac9-9851-01c5314c4bad\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-r8wjk" Jan 14 00:07:32.270316 containerd[2043]: time="2026-01-14T00:07:32.270213347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-r8wjk,Uid:71f35b87-b8b2-4ac9-9851-01c5314c4bad,Namespace:tigera-operator,Attempt:0,}" Jan 14 00:07:32.327179 containerd[2043]: time="2026-01-14T00:07:32.327120999Z" level=info msg="connecting to shim 8a2ccc875a7d10ff4bfd99ed251badd466b80c771b4a0f30d32b07b646e4db1f" address="unix:///run/containerd/s/e6b5ecea1489c4154218514d002b88a31bc0fade49fec6ed1ccac4859fa3967f" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:07:32.349183 systemd[1]: Started cri-containerd-8a2ccc875a7d10ff4bfd99ed251badd466b80c771b4a0f30d32b07b646e4db1f.scope - libcontainer container 8a2ccc875a7d10ff4bfd99ed251badd466b80c771b4a0f30d32b07b646e4db1f. 
Jan 14 00:07:32.359000 audit: BPF prog-id=157 op=LOAD Jan 14 00:07:32.363009 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 14 00:07:32.363055 kernel: audit: type=1334 audit(1768349252.359:457): prog-id=157 op=LOAD Jan 14 00:07:32.368278 containerd[2043]: time="2026-01-14T00:07:32.368211854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kpbk7,Uid:1e812412-68f9-48a4-81fc-dbabd0723eb5,Namespace:kube-system,Attempt:0,}" Jan 14 00:07:32.366000 audit: BPF prog-id=158 op=LOAD Jan 14 00:07:32.372476 kernel: audit: type=1334 audit(1768349252.366:458): prog-id=158 op=LOAD Jan 14 00:07:32.366000 audit[3643]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3632 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.388479 kernel: audit: type=1300 audit(1768349252.366:458): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3632 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861326363633837356137643130666634626664393965643235316261 Jan 14 00:07:32.405553 kernel: audit: type=1327 audit(1768349252.366:458): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861326363633837356137643130666634626664393965643235316261 Jan 14 00:07:32.366000 audit: BPF prog-id=158 op=UNLOAD Jan 14 00:07:32.410240 kernel: audit: type=1334 audit(1768349252.366:459): prog-id=158 op=UNLOAD Jan 14 00:07:32.366000 audit[3643]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3632 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.426551 kernel: audit: type=1300 audit(1768349252.366:459): arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3632 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861326363633837356137643130666634626664393965643235316261 Jan 14 00:07:32.443244 kernel: audit: type=1327 audit(1768349252.366:459): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861326363633837356137643130666634626664393965643235316261 Jan 14 00:07:32.366000 audit: BPF prog-id=159 op=LOAD Jan 14 00:07:32.448143 kernel: audit: type=1334 audit(1768349252.366:460): prog-id=159 op=LOAD Jan 14 
00:07:32.366000 audit[3643]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3632 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.464799 kernel: audit: type=1300 audit(1768349252.366:460): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3632 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.466021 kernel: audit: type=1327 audit(1768349252.366:460): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861326363633837356137643130666634626664393965643235316261 Jan 14 00:07:32.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861326363633837356137643130666634626664393965643235316261 Jan 14 00:07:32.366000 audit: BPF prog-id=160 op=LOAD Jan 14 00:07:32.366000 audit[3643]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3632 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861326363633837356137643130666634626664393965643235316261 Jan 14 00:07:32.366000 audit: BPF prog-id=160 op=UNLOAD Jan 14 00:07:32.366000 audit[3643]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3632 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861326363633837356137643130666634626664393965643235316261 Jan 14 00:07:32.366000 audit: BPF prog-id=159 op=UNLOAD Jan 14 00:07:32.366000 audit[3643]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3632 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861326363633837356137643130666634626664393965643235316261 Jan 14 00:07:32.366000 audit: BPF prog-id=161 op=LOAD Jan 14 00:07:32.366000 audit[3643]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3632 pid=3643 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861326363633837356137643130666634626664393965643235316261 Jan 14 00:07:32.503023 containerd[2043]: time="2026-01-14T00:07:32.502967835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-r8wjk,Uid:71f35b87-b8b2-4ac9-9851-01c5314c4bad,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8a2ccc875a7d10ff4bfd99ed251badd466b80c771b4a0f30d32b07b646e4db1f\"" Jan 14 00:07:32.506122 containerd[2043]: time="2026-01-14T00:07:32.506047411Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 14 00:07:32.518411 containerd[2043]: time="2026-01-14T00:07:32.518171216Z" level=info msg="connecting to shim 04f1adc0cfc5c16a38393a8aed6118c11a1e45d7b569a211f13bbb47e47d5d5a" address="unix:///run/containerd/s/676612da43226a8b75c6ce89900c49792b283288dc268e9b8725bb6e96a0aefb" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:07:32.536187 systemd[1]: Started cri-containerd-04f1adc0cfc5c16a38393a8aed6118c11a1e45d7b569a211f13bbb47e47d5d5a.scope - libcontainer container 04f1adc0cfc5c16a38393a8aed6118c11a1e45d7b569a211f13bbb47e47d5d5a. Jan 14 00:07:32.543000 audit: BPF prog-id=162 op=LOAD Jan 14 00:07:32.543000 audit: BPF prog-id=163 op=LOAD Jan 14 00:07:32.543000 audit[3689]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=3675 pid=3689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.543000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034663161646330636663356331366133383339336138616564363131 Jan 14 00:07:32.543000 audit: BPF prog-id=163 op=UNLOAD Jan 14 00:07:32.543000 audit[3689]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3675 pid=3689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.543000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034663161646330636663356331366133383339336138616564363131 Jan 14 00:07:32.544000 audit: BPF prog-id=164 op=LOAD Jan 14 00:07:32.544000 audit[3689]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=3675 pid=3689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.544000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034663161646330636663356331366133383339336138616564363131 Jan 14 00:07:32.544000 audit: BPF prog-id=165 op=LOAD Jan 14 00:07:32.544000 audit[3689]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=3675 pid=3689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.544000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034663161646330636663356331366133383339336138616564363131 Jan 14 00:07:32.544000 audit: BPF prog-id=165 op=UNLOAD Jan 14 00:07:32.544000 audit[3689]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3675 pid=3689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.544000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034663161646330636663356331366133383339336138616564363131 Jan 14 00:07:32.544000 audit: BPF prog-id=164 op=UNLOAD Jan 14 00:07:32.544000 audit[3689]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3675 pid=3689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.544000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034663161646330636663356331366133383339336138616564363131 Jan 14 00:07:32.544000 audit: BPF prog-id=166 op=LOAD Jan 14 00:07:32.544000 audit[3689]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=3675 pid=3689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.544000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034663161646330636663356331366133383339336138616564363131 Jan 14 00:07:32.559267 containerd[2043]: time="2026-01-14T00:07:32.559215869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kpbk7,Uid:1e812412-68f9-48a4-81fc-dbabd0723eb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"04f1adc0cfc5c16a38393a8aed6118c11a1e45d7b569a211f13bbb47e47d5d5a\"" Jan 14 00:07:32.567968 containerd[2043]: time="2026-01-14T00:07:32.567930550Z" level=info msg="CreateContainer within sandbox \"04f1adc0cfc5c16a38393a8aed6118c11a1e45d7b569a211f13bbb47e47d5d5a\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 00:07:32.588492 containerd[2043]: time="2026-01-14T00:07:32.587697443Z" level=info msg="Container abad57c09c3494aa51eba490bfbe31219c2be17bb2127930351a6074ce18c518: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:07:32.623064 containerd[2043]: time="2026-01-14T00:07:32.622988874Z" level=info msg="CreateContainer within sandbox \"04f1adc0cfc5c16a38393a8aed6118c11a1e45d7b569a211f13bbb47e47d5d5a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"abad57c09c3494aa51eba490bfbe31219c2be17bb2127930351a6074ce18c518\"" Jan 14 00:07:32.623909 containerd[2043]: time="2026-01-14T00:07:32.623869772Z" level=info msg="StartContainer for \"abad57c09c3494aa51eba490bfbe31219c2be17bb2127930351a6074ce18c518\"" Jan 14 00:07:32.625312 containerd[2043]: time="2026-01-14T00:07:32.625290227Z" level=info msg="connecting to shim abad57c09c3494aa51eba490bfbe31219c2be17bb2127930351a6074ce18c518" address="unix:///run/containerd/s/676612da43226a8b75c6ce89900c49792b283288dc268e9b8725bb6e96a0aefb" protocol=ttrpc version=3 Jan 14 00:07:32.646160 systemd[1]: Started cri-containerd-abad57c09c3494aa51eba490bfbe31219c2be17bb2127930351a6074ce18c518.scope - libcontainer container abad57c09c3494aa51eba490bfbe31219c2be17bb2127930351a6074ce18c518. Jan 14 00:07:32.677000 audit: BPF prog-id=167 op=LOAD Jan 14 00:07:32.677000 audit[3714]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3675 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162616435376330396333343934616135316562613439306266626533 Jan 14 00:07:32.677000 audit: BPF prog-id=168 op=LOAD Jan 14 00:07:32.677000 audit[3714]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3675 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162616435376330396333343934616135316562613439306266626533 Jan 14 00:07:32.677000 audit: BPF prog-id=168 op=UNLOAD Jan 14 00:07:32.677000 audit[3714]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3675 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162616435376330396333343934616135316562613439306266626533 Jan 14 00:07:32.677000 audit: BPF prog-id=167 op=UNLOAD Jan 14 00:07:32.677000 audit[3714]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3675 pid=3714 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162616435376330396333343934616135316562613439306266626533 Jan 14 00:07:32.677000 audit: BPF prog-id=169 op=LOAD Jan 14 00:07:32.677000 audit[3714]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3675 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162616435376330396333343934616135316562613439306266626533 Jan 14 00:07:32.698559 containerd[2043]: time="2026-01-14T00:07:32.698462908Z" level=info msg="StartContainer for \"abad57c09c3494aa51eba490bfbe31219c2be17bb2127930351a6074ce18c518\" returns successfully" Jan 14 00:07:32.780776 kubelet[3576]: I0114 00:07:32.780595 3576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kpbk7" podStartSLOduration=1.780511044 podStartE2EDuration="1.780511044s" podCreationTimestamp="2026-01-14 00:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:07:32.779642162 +0000 UTC m=+7.158044345" watchObservedRunningTime="2026-01-14 00:07:32.780511044 +0000 UTC m=+7.158913227" Jan 14 00:07:32.879000 audit[3776]: NETFILTER_CFG table=mangle:57 family=10 entries=1 op=nft_register_chain pid=3776 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:32.879000 audit[3776]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffeb351ac0 a2=0 a3=1 items=0 ppid=3727 pid=3776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.879000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 00:07:32.882000 audit[3778]: NETFILTER_CFG table=mangle:58 family=2 entries=1 op=nft_register_chain pid=3778 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:32.882000 audit[3778]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff9ebf2d0 a2=0 a3=1 items=0 ppid=3727 pid=3778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.882000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 00:07:32.884000 audit[3781]: NETFILTER_CFG table=nat:59 family=10 entries=1 op=nft_register_chain pid=3781 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:32.884000 audit[3781]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdc095c70 a2=0 a3=1 items=0 ppid=3727 pid=3781 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.884000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 00:07:32.885000 audit[3782]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=3782 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:32.885000 audit[3782]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc98c0ae0 a2=0 a3=1 items=0 ppid=3727 pid=3782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.885000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 00:07:32.886000 audit[3783]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=3783 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:32.886000 audit[3783]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffffc64570 a2=0 a3=1 items=0 ppid=3727 pid=3783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.886000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 00:07:32.888000 audit[3785]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_chain pid=3785 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:32.888000 audit[3785]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd8ef01c0 a2=0 a3=1 items=0 ppid=3727 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.888000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 00:07:32.981000 audit[3786]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3786 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:32.981000 audit[3786]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe5e04fa0 a2=0 a3=1 items=0 ppid=3727 pid=3786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.981000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 00:07:32.984000 audit[3788]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3788 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:32.984000 audit[3788]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffdeb6f080 a2=0 a3=1 items=0 ppid=3727 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.984000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Jan 14 00:07:32.987000 audit[3791]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_rule pid=3791 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:32.987000 audit[3791]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffddd80a30 a2=0 a3=1 items=0 ppid=3727 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.987000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 14 00:07:32.988000 audit[3792]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_chain pid=3792 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:32.988000 audit[3792]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffda12a770 a2=0 a3=1 items=0 ppid=3727 pid=3792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.988000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 00:07:32.990000 audit[3794]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3794 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:32.990000 audit[3794]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd0006d20 a2=0 a3=1 items=0 ppid=3727 pid=3794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.990000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 00:07:32.991000 audit[3795]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3795 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:32.991000 audit[3795]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe6204480 a2=0 a3=1 items=0 ppid=3727 pid=3795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.991000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 00:07:32.993000 audit[3797]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3797 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:32.993000 audit[3797]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd1d0c5b0 a2=0 a3=1 items=0 ppid=3727 pid=3797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.993000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 00:07:32.996000 audit[3800]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=3800 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:32.996000 audit[3800]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff44be8b0 a2=0 a3=1 items=0 ppid=3727 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.996000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 00:07:32.997000 audit[3801]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=3801 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:32.997000 audit[3801]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe9403e00 a2=0 a3=1 items=0 ppid=3727 pid=3801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.997000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 00:07:32.999000 audit[3803]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3803 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:32.999000 audit[3803]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd5ebd720 a2=0 a3=1 items=0 ppid=3727 pid=3803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:32.999000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 00:07:33.000000 audit[3804]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=3804 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:33.000000 audit[3804]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd6c31c60 a2=0 a3=1 items=0 ppid=3727 pid=3804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.000000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 00:07:33.002000 audit[3806]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=3806 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:33.002000 audit[3806]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe9b772b0 
a2=0 a3=1 items=0 ppid=3727 pid=3806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.002000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Jan 14 00:07:33.005000 audit[3809]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=3809 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:33.005000 audit[3809]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff64ef190 a2=0 a3=1 items=0 ppid=3727 pid=3809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.005000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 14 00:07:33.008000 audit[3812]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=3812 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:33.008000 audit[3812]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffb4044f0 a2=0 a3=1 items=0 ppid=3727 pid=3812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.008000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 14 00:07:33.009000 audit[3813]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3813 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:33.009000 audit[3813]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc633e720 a2=0 a3=1 items=0 ppid=3727 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.009000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 00:07:33.011000 audit[3815]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3815 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:33.011000 audit[3815]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffc8423d10 a2=0 a3=1 items=0 ppid=3727 pid=3815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.011000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 
00:07:33.014000 audit[3818]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_rule pid=3818 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:33.014000 audit[3818]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff1439390 a2=0 a3=1 items=0 ppid=3727 pid=3818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.014000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 00:07:33.015000 audit[3819]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_chain pid=3819 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:33.015000 audit[3819]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe4efb650 a2=0 a3=1 items=0 ppid=3727 pid=3819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.015000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 00:07:33.017000 audit[3821]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=3821 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:07:33.017000 audit[3821]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=fffffab4d1c0 a2=0 a3=1 items=0 ppid=3727 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.017000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 00:07:33.041000 audit[3827]: NETFILTER_CFG table=filter:82 family=2 entries=8 op=nft_register_rule pid=3827 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:07:33.041000 audit[3827]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd9fbf9f0 a2=0 a3=1 items=0 ppid=3727 pid=3827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.041000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:07:33.047000 audit[3827]: NETFILTER_CFG table=nat:83 family=2 entries=14 op=nft_register_chain pid=3827 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:07:33.047000 audit[3827]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffd9fbf9f0 a2=0 a3=1 items=0 ppid=3727 pid=3827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.047000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:07:33.049000 audit[3832]: NETFILTER_CFG table=filter:84 family=10 
entries=1 op=nft_register_chain pid=3832 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:33.049000 audit[3832]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc83dcc40 a2=0 a3=1 items=0 ppid=3727 pid=3832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.049000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 00:07:33.051000 audit[3834]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=3834 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:33.051000 audit[3834]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffff6a719d0 a2=0 a3=1 items=0 ppid=3727 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.051000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 14 00:07:33.054000 audit[3837]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=3837 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:33.054000 audit[3837]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff0c90d60 a2=0 a3=1 items=0 ppid=3727 pid=3837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.054000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Jan 14 00:07:33.055000 audit[3838]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=3838 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:33.055000 audit[3838]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd740caf0 a2=0 a3=1 items=0 ppid=3727 pid=3838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.055000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 00:07:33.057000 audit[3840]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=3840 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:33.057000 audit[3840]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc67b9d30 a2=0 a3=1 items=0 ppid=3727 pid=3840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.057000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 00:07:33.059000 audit[3841]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3841 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:33.059000 audit[3841]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffee59e160 a2=0 a3=1 items=0 ppid=3727 pid=3841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.059000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 00:07:33.061000 audit[3843]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3843 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:33.061000 audit[3843]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc963e9d0 a2=0 a3=1 items=0 ppid=3727 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.061000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 00:07:33.064000 audit[3846]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=3846 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:33.064000 audit[3846]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffe835cfe0 a2=0 a3=1 items=0 ppid=3727 pid=3846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.064000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 00:07:33.065000 audit[3847]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=3847 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:33.065000 audit[3847]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff5aaf760 a2=0 a3=1 items=0 ppid=3727 pid=3847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.065000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 00:07:33.067000 audit[3849]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3849 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:33.067000 audit[3849]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdb98cc50 a2=0 a3=1 items=0 ppid=3727 pid=3849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.067000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 00:07:33.068000 audit[3850]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=3850 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:33.068000 audit[3850]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffebc10dc0 a2=0 a3=1 items=0 ppid=3727 pid=3850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.068000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 00:07:33.070000 audit[3852]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=3852 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:33.070000 audit[3852]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffef80d900 a2=0 a3=1 items=0 ppid=3727 pid=3852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.070000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 14 00:07:33.073000 audit[3855]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=3855 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:33.073000 audit[3855]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe2d4cd00 a2=0 a3=1 items=0 ppid=3727 pid=3855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.073000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 14 00:07:33.076000 audit[3858]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=3858 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:33.076000 audit[3858]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff2aedb10 a2=0 a3=1 items=0 ppid=3727 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.076000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Jan 14 00:07:33.077000 audit[3859]: NETFILTER_CFG table=nat:98 family=10 
entries=1 op=nft_register_chain pid=3859 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:33.077000 audit[3859]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffcfa86200 a2=0 a3=1 items=0 ppid=3727 pid=3859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.077000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 00:07:33.079000 audit[3861]: NETFILTER_CFG table=nat:99 family=10 entries=1 op=nft_register_rule pid=3861 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:33.079000 audit[3861]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffe485fd30 a2=0 a3=1 items=0 ppid=3727 pid=3861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.079000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 00:07:33.081000 audit[3864]: NETFILTER_CFG table=nat:100 family=10 entries=1 op=nft_register_rule pid=3864 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:33.081000 audit[3864]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe32f1730 a2=0 a3=1 items=0 ppid=3727 pid=3864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.081000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 00:07:33.083000 audit[3865]: NETFILTER_CFG table=nat:101 family=10 entries=1 op=nft_register_chain pid=3865 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:33.083000 audit[3865]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe56672a0 a2=0 a3=1 items=0 ppid=3727 pid=3865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.083000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 00:07:33.084000 audit[3867]: NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=3867 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:33.084000 audit[3867]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffd67aafa0 a2=0 a3=1 items=0 ppid=3727 pid=3867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.084000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 00:07:33.085000 audit[3868]: NETFILTER_CFG 
table=filter:103 family=10 entries=1 op=nft_register_chain pid=3868 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:33.085000 audit[3868]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffd9f3480 a2=0 a3=1 items=0 ppid=3727 pid=3868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.085000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 00:07:33.087000 audit[3870]: NETFILTER_CFG table=filter:104 family=10 entries=1 op=nft_register_rule pid=3870 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:33.087000 audit[3870]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe87686b0 a2=0 a3=1 items=0 ppid=3727 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.087000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 00:07:33.090000 audit[3873]: NETFILTER_CFG table=filter:105 family=10 entries=1 op=nft_register_rule pid=3873 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:07:33.090000 audit[3873]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe4845af0 a2=0 a3=1 items=0 ppid=3727 pid=3873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.090000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 00:07:33.094000 audit[3875]: NETFILTER_CFG table=filter:106 family=10 entries=3 op=nft_register_rule pid=3875 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 00:07:33.094000 audit[3875]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffef773c00 a2=0 a3=1 items=0 ppid=3727 pid=3875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.094000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:07:33.094000 audit[3875]: NETFILTER_CFG table=nat:107 family=10 entries=7 op=nft_register_chain pid=3875 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 00:07:33.094000 audit[3875]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffef773c00 a2=0 a3=1 items=0 ppid=3727 pid=3875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:33.094000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:07:35.548886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3952462584.mount: Deactivated successfully. 
Jan 14 00:07:37.879553 containerd[2043]: time="2026-01-14T00:07:37.879491619Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:37.883018 containerd[2043]: time="2026-01-14T00:07:37.882924120Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Jan 14 00:07:37.886356 containerd[2043]: time="2026-01-14T00:07:37.886315211Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:37.890951 containerd[2043]: time="2026-01-14T00:07:37.890906773Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:37.891572 containerd[2043]: time="2026-01-14T00:07:37.891235130Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 5.385035705s" Jan 14 00:07:37.891572 containerd[2043]: time="2026-01-14T00:07:37.891266395Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 14 00:07:37.903058 containerd[2043]: time="2026-01-14T00:07:37.903028066Z" level=info msg="CreateContainer within sandbox \"8a2ccc875a7d10ff4bfd99ed251badd466b80c771b4a0f30d32b07b646e4db1f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 14 00:07:37.924897 containerd[2043]: time="2026-01-14T00:07:37.924044264Z" level=info msg="Container de28fba669cf7ad3e99c151d5847918599657933fab241b2c586da2230b04bbb: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:07:37.938507 containerd[2043]: time="2026-01-14T00:07:37.938464638Z" level=info msg="CreateContainer within sandbox \"8a2ccc875a7d10ff4bfd99ed251badd466b80c771b4a0f30d32b07b646e4db1f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"de28fba669cf7ad3e99c151d5847918599657933fab241b2c586da2230b04bbb\"" Jan 14 00:07:37.939088 containerd[2043]: time="2026-01-14T00:07:37.939019355Z" level=info msg="StartContainer for \"de28fba669cf7ad3e99c151d5847918599657933fab241b2c586da2230b04bbb\"" Jan 14 00:07:37.941054 containerd[2043]: time="2026-01-14T00:07:37.939893085Z" level=info msg="connecting to shim de28fba669cf7ad3e99c151d5847918599657933fab241b2c586da2230b04bbb" address="unix:///run/containerd/s/e6b5ecea1489c4154218514d002b88a31bc0fade49fec6ed1ccac4859fa3967f" protocol=ttrpc version=3 Jan 14 00:07:37.957214 systemd[1]: Started cri-containerd-de28fba669cf7ad3e99c151d5847918599657933fab241b2c586da2230b04bbb.scope - libcontainer container de28fba669cf7ad3e99c151d5847918599657933fab241b2c586da2230b04bbb. 
Jan 14 00:07:37.969036 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 14 00:07:37.969148 kernel: audit: type=1334 audit(1768349257.964:529): prog-id=170 op=LOAD Jan 14 00:07:37.964000 audit: BPF prog-id=170 op=LOAD Jan 14 00:07:37.972000 audit: BPF prog-id=171 op=LOAD Jan 14 00:07:37.977392 kernel: audit: type=1334 audit(1768349257.972:530): prog-id=171 op=LOAD Jan 14 00:07:37.972000 audit[3884]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=3632 pid=3884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:37.994387 kernel: audit: type=1300 audit(1768349257.972:530): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=3632 pid=3884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:37.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465323866626136363963663761643365393963313531643538343739 Jan 14 00:07:38.010605 kernel: audit: type=1327 audit(1768349257.972:530): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465323866626136363963663761643365393963313531643538343739 Jan 14 00:07:37.972000 audit: BPF prog-id=171 op=UNLOAD Jan 14 00:07:38.015409 kernel: audit: type=1334 audit(1768349257.972:531): prog-id=171 op=UNLOAD Jan 14 00:07:37.972000 audit[3884]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3632 pid=3884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:38.063245 kernel: audit: type=1300 audit(1768349257.972:531): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3632 pid=3884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:37.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465323866626136363963663761643365393963313531643538343739 Jan 14 00:07:38.081228 kernel: audit: type=1327 audit(1768349257.972:531): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465323866626136363963663761643365393963313531643538343739 Jan 14 00:07:37.972000 audit: BPF prog-id=172 op=LOAD Jan 14 00:07:38.086870 kernel: audit: type=1334 audit(1768349257.972:532): prog-id=172 op=LOAD Jan 14 00:07:37.972000 audit[3884]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=3632 pid=3884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:38.104653 kernel: audit: type=1300 audit(1768349257.972:532): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=3632 pid=3884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:37.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465323866626136363963663761643365393963313531643538343739 Jan 14 00:07:38.123821 kernel: audit: type=1327 audit(1768349257.972:532): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465323866626136363963663761643365393963313531643538343739 Jan 14 00:07:37.972000 audit: BPF prog-id=173 op=LOAD Jan 14 00:07:37.972000 audit[3884]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=3632 pid=3884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:37.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465323866626136363963663761643365393963313531643538343739 Jan 14 00:07:37.972000 audit: BPF prog-id=173 op=UNLOAD Jan 14 00:07:37.972000 audit[3884]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3632 pid=3884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:37.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465323866626136363963663761643365393963313531643538343739 Jan 14 00:07:37.972000 audit: BPF prog-id=172 op=UNLOAD Jan 14 00:07:37.972000 audit[3884]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3632 pid=3884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:37.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465323866626136363963663761643365393963313531643538343739 Jan 14 00:07:37.972000 audit: BPF prog-id=174 op=LOAD Jan 14 00:07:37.972000 audit[3884]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=3632 pid=3884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:37.972000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465323866626136363963663761643365393963313531643538343739 Jan 14 00:07:38.125434 waagent[2250]: 2026-01-14T00:07:38.124453Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 14 00:07:38.132581 containerd[2043]: time="2026-01-14T00:07:38.132395266Z" level=info msg="StartContainer for \"de28fba669cf7ad3e99c151d5847918599657933fab241b2c586da2230b04bbb\" returns successfully" Jan 14 00:07:38.136622 waagent[2250]: 2026-01-14T00:07:38.136314Z INFO ExtHandler Jan 14 00:07:38.136622 waagent[2250]: 2026-01-14T00:07:38.136417Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: f0db8106-b274-4873-ae78-5d132d9bffd4 eTag: 6535486771575694657 source: Fabric] Jan 14 00:07:38.137432 waagent[2250]: 2026-01-14T00:07:38.137291Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 14 00:07:38.139178 waagent[2250]: 2026-01-14T00:07:38.138464Z INFO ExtHandler Jan 14 00:07:38.139178 waagent[2250]: 2026-01-14T00:07:38.138520Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 14 00:07:38.207126 waagent[2250]: 2026-01-14T00:07:38.207076Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 14 00:07:38.262885 waagent[2250]: 2026-01-14T00:07:38.262821Z INFO ExtHandler Downloaded certificate {'thumbprint': '6161878275D7ECE409471864F6B6A440DB0A0DD2', 'hasPrivateKey': True} Jan 14 00:07:38.263472 waagent[2250]: 2026-01-14T00:07:38.263436Z INFO ExtHandler Fetch goal state completed Jan 14 00:07:38.263870 waagent[2250]: 2026-01-14T00:07:38.263837Z INFO ExtHandler ExtHandler Jan 14 00:07:38.264043 waagent[2250]: 2026-01-14T00:07:38.263981Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 63f60db3-2148-4e19-a989-4597f9c44047 correlation a8f35f60-99ab-447a-9c83-37934655725b created: 2026-01-14T00:07:27.762520Z] Jan 14 00:07:38.264386 waagent[2250]: 2026-01-14T00:07:38.264354Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 14 00:07:38.264859 waagent[2250]: 2026-01-14T00:07:38.264828Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Jan 14 00:07:38.793216 kubelet[3576]: I0114 00:07:38.793149 3576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-r8wjk" podStartSLOduration=2.405911582 podStartE2EDuration="7.793134988s" podCreationTimestamp="2026-01-14 00:07:31 +0000 UTC" firstStartedPulling="2026-01-14 00:07:32.504788122 +0000 UTC m=+6.883190305" lastFinishedPulling="2026-01-14 00:07:37.892011528 +0000 UTC m=+12.270413711" observedRunningTime="2026-01-14 00:07:38.792308972 +0000 UTC m=+13.170711163" watchObservedRunningTime="2026-01-14 00:07:38.793134988 +0000 UTC m=+13.171537171" Jan 14 00:07:43.158649 sudo[2532]: pam_unix(sudo:session): session closed for user root Jan 14 00:07:43.158000 audit[2532]: USER_END pid=2532 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 00:07:43.162009 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 14 00:07:43.162096 kernel: audit: type=1106 audit(1768349263.158:537): pid=2532 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:07:43.158000 audit[2532]: CRED_DISP pid=2532 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:07:43.197153 kernel: audit: type=1104 audit(1768349263.158:538): pid=2532 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:07:43.231407 sshd[2531]: Connection closed by 10.200.16.10 port 40650 Jan 14 00:07:43.231299 sshd-session[2527]: pam_unix(sshd:session): session closed for user core Jan 14 00:07:43.233000 audit[2527]: USER_END pid=2527 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:07:43.268907 systemd[1]: sshd@6-10.200.20.18:22-10.200.16.10:40650.service: Deactivated successfully. Jan 14 00:07:43.270721 systemd[1]: session-10.scope: Deactivated successfully. Jan 14 00:07:43.270923 systemd[1]: session-10.scope: Consumed 3.521s CPU time, 219.9M memory peak. Jan 14 00:07:43.233000 audit[2527]: CRED_DISP pid=2527 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:07:43.281525 systemd-logind[1997]: Session 10 logged out. Waiting for processes to exit. Jan 14 00:07:43.283663 systemd-logind[1997]: Removed session 10. Jan 14 00:07:43.287247 kernel: audit: type=1106 audit(1768349263.233:539): pid=2527 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:07:43.287319 kernel: audit: type=1104 audit(1768349263.233:540): pid=2527 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:07:43.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.18:22-10.200.16.10:40650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:07:43.302981 kernel: audit: type=1131 audit(1768349263.268:541): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.18:22-10.200.16.10:40650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:07:44.906000 audit[3966]: NETFILTER_CFG table=filter:108 family=2 entries=15 op=nft_register_rule pid=3966 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:07:44.906000 audit[3966]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffdcdaf130 a2=0 a3=1 items=0 ppid=3727 pid=3966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:44.939827 kernel: audit: type=1325 audit(1768349264.906:542): table=filter:108 family=2 entries=15 op=nft_register_rule pid=3966 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:07:44.939930 kernel: audit: type=1300 audit(1768349264.906:542): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffdcdaf130 a2=0 a3=1 items=0 ppid=3727 pid=3966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:44.906000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:07:44.949622 kernel: audit: type=1327 audit(1768349264.906:542): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:07:44.920000 audit[3966]: NETFILTER_CFG table=nat:109 family=2 entries=12 op=nft_register_rule pid=3966 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:07:44.959356 kernel: audit: type=1325 audit(1768349264.920:543): table=nat:109 family=2 entries=12 op=nft_register_rule pid=3966 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:07:44.920000 audit[3966]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdcdaf130 a2=0 a3=1 items=0 ppid=3727 pid=3966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:44.978106 kernel: audit: type=1300 audit(1768349264.920:543): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdcdaf130 a2=0 a3=1 items=0 ppid=3727 pid=3966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:44.920000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:07:45.975000 audit[3970]: NETFILTER_CFG table=filter:110 family=2 entries=16 op=nft_register_rule pid=3970 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:07:45.975000 audit[3970]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffcf9f09f0 a2=0 a3=1 items=0 ppid=3727 pid=3970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:45.975000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:07:45.979000 audit[3970]: NETFILTER_CFG table=nat:111 family=2 entries=12 op=nft_register_rule pid=3970 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:07:45.979000 audit[3970]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=2700 a0=3 a1=ffffcf9f09f0 a2=0 a3=1 items=0 ppid=3727 pid=3970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:45.979000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:07:47.199000 audit[3972]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=3972 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:07:47.199000 audit[3972]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff5597f60 a2=0 a3=1 items=0 ppid=3727 pid=3972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:47.199000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:07:47.204000 audit[3972]: NETFILTER_CFG table=nat:113 family=2 entries=12 op=nft_register_rule pid=3972 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:07:47.204000 audit[3972]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff5597f60 a2=0 a3=1 items=0 ppid=3727 pid=3972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:47.204000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:07:48.212000 audit[3974]: NETFILTER_CFG table=filter:114 family=2 entries=19 op=nft_register_rule pid=3974 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:07:48.216612 kernel: kauditd_printk_skb: 13 callbacks suppressed Jan 14 00:07:48.216700 kernel: audit: type=1325 audit(1768349268.212:548): table=filter:114 family=2 entries=19 op=nft_register_rule pid=3974 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:07:48.227022 kernel: audit: type=1300 audit(1768349268.212:548): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff3cf1e70 a2=0 a3=1 items=0 ppid=3727 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:48.212000 audit[3974]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff3cf1e70 a2=0 a3=1 items=0 ppid=3727 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:48.212000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:07:48.254161 kernel: audit: type=1327 audit(1768349268.212:548): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:07:48.226000 audit[3974]: NETFILTER_CFG table=nat:115 family=2 entries=12 op=nft_register_rule pid=3974 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:07:48.264485 kernel: audit: type=1325 audit(1768349268.226:549): table=nat:115 family=2 entries=12 op=nft_register_rule pid=3974 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:07:48.226000 audit[3974]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff3cf1e70 a2=0 a3=1 items=0 ppid=3727 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:48.283208 kernel: audit: type=1300 audit(1768349268.226:549): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff3cf1e70 a2=0 a3=1 items=0 ppid=3727 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:48.226000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:07:48.292366 kernel: audit: type=1327 audit(1768349268.226:549): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:07:50.025000 audit[3976]: NETFILTER_CFG table=filter:116 family=2 entries=21 op=nft_register_rule pid=3976 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:07:50.038102 kernel: audit: type=1325 audit(1768349270.025:550): table=filter:116 family=2 entries=21 op=nft_register_rule pid=3976 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:07:50.038241 kernel: audit: type=1300 audit(1768349270.025:550): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffc31ad8b0 a2=0 a3=1 items=0 ppid=3727 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:50.025000 audit[3976]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffc31ad8b0 a2=0 a3=1 items=0 ppid=3727 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:50.025000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:07:50.067663 kernel: audit: type=1327 audit(1768349270.025:550): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:07:50.060000 audit[3976]: NETFILTER_CFG table=nat:117 family=2 entries=12 op=nft_register_rule pid=3976 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:07:50.078533 kernel: audit: type=1325 audit(1768349270.060:551): table=nat:117 family=2 entries=12 op=nft_register_rule pid=3976 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:07:50.060000 audit[3976]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc31ad8b0 a2=0 a3=1 items=0 ppid=3727 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:50.060000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:07:50.110208 systemd[1]: Created slice kubepods-besteffort-poddf166d75_e56a_403d_9e88_28e7e2493d74.slice - libcontainer container 
kubepods-besteffort-poddf166d75_e56a_403d_9e88_28e7e2493d74.slice. Jan 14 00:07:50.160510 kubelet[3576]: I0114 00:07:50.160175 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df166d75-e56a-403d-9e88-28e7e2493d74-tigera-ca-bundle\") pod \"calico-typha-75f58c8c9f-wm87s\" (UID: \"df166d75-e56a-403d-9e88-28e7e2493d74\") " pod="calico-system/calico-typha-75f58c8c9f-wm87s" Jan 14 00:07:50.160510 kubelet[3576]: I0114 00:07:50.160222 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/df166d75-e56a-403d-9e88-28e7e2493d74-typha-certs\") pod \"calico-typha-75f58c8c9f-wm87s\" (UID: \"df166d75-e56a-403d-9e88-28e7e2493d74\") " pod="calico-system/calico-typha-75f58c8c9f-wm87s" Jan 14 00:07:50.160510 kubelet[3576]: I0114 00:07:50.160237 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxn8t\" (UniqueName: \"kubernetes.io/projected/df166d75-e56a-403d-9e88-28e7e2493d74-kube-api-access-dxn8t\") pod \"calico-typha-75f58c8c9f-wm87s\" (UID: \"df166d75-e56a-403d-9e88-28e7e2493d74\") " pod="calico-system/calico-typha-75f58c8c9f-wm87s" Jan 14 00:07:50.248286 systemd[1]: Created slice kubepods-besteffort-pod3b5dc287_df78_4df3_a78a_d149376d7d7a.slice - libcontainer container kubepods-besteffort-pod3b5dc287_df78_4df3_a78a_d149376d7d7a.slice. Jan 14 00:07:50.260650 kubelet[3576]: I0114 00:07:50.260613 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b5dc287-df78-4df3-a78a-d149376d7d7a-xtables-lock\") pod \"calico-node-kqctj\" (UID: \"3b5dc287-df78-4df3-a78a-d149376d7d7a\") " pod="calico-system/calico-node-kqctj" Jan 14 00:07:50.261008 kubelet[3576]: I0114 00:07:50.260854 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3b5dc287-df78-4df3-a78a-d149376d7d7a-flexvol-driver-host\") pod \"calico-node-kqctj\" (UID: \"3b5dc287-df78-4df3-a78a-d149376d7d7a\") " pod="calico-system/calico-node-kqctj" Jan 14 00:07:50.261008 kubelet[3576]: I0114 00:07:50.260871 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3b5dc287-df78-4df3-a78a-d149376d7d7a-var-lib-calico\") pod \"calico-node-kqctj\" (UID: \"3b5dc287-df78-4df3-a78a-d149376d7d7a\") " pod="calico-system/calico-node-kqctj" Jan 14 00:07:50.262040 kubelet[3576]: I0114 00:07:50.260893 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3b5dc287-df78-4df3-a78a-d149376d7d7a-cni-log-dir\") pod \"calico-node-kqctj\" (UID: \"3b5dc287-df78-4df3-a78a-d149376d7d7a\") " pod="calico-system/calico-node-kqctj" Jan 14 00:07:50.262040 kubelet[3576]: I0114 00:07:50.261506 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3b5dc287-df78-4df3-a78a-d149376d7d7a-node-certs\") pod \"calico-node-kqctj\" (UID: \"3b5dc287-df78-4df3-a78a-d149376d7d7a\") " pod="calico-system/calico-node-kqctj" Jan 14 00:07:50.262040 kubelet[3576]: I0114 00:07:50.261520 3576 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3b5dc287-df78-4df3-a78a-d149376d7d7a-var-run-calico\") pod \"calico-node-kqctj\" (UID: \"3b5dc287-df78-4df3-a78a-d149376d7d7a\") " pod="calico-system/calico-node-kqctj" Jan 14 00:07:50.262040 kubelet[3576]: I0114 00:07:50.261530 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b5dc287-df78-4df3-a78a-d149376d7d7a-lib-modules\") pod \"calico-node-kqctj\" (UID: \"3b5dc287-df78-4df3-a78a-d149376d7d7a\") " pod="calico-system/calico-node-kqctj" Jan 14 00:07:50.262040 kubelet[3576]: I0114 00:07:50.261554 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3b5dc287-df78-4df3-a78a-d149376d7d7a-cni-bin-dir\") pod \"calico-node-kqctj\" (UID: \"3b5dc287-df78-4df3-a78a-d149376d7d7a\") " pod="calico-system/calico-node-kqctj" Jan 14 00:07:50.262168 kubelet[3576]: I0114 00:07:50.261564 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3b5dc287-df78-4df3-a78a-d149376d7d7a-cni-net-dir\") pod \"calico-node-kqctj\" (UID: \"3b5dc287-df78-4df3-a78a-d149376d7d7a\") " pod="calico-system/calico-node-kqctj" Jan 14 00:07:50.262168 kubelet[3576]: I0114 00:07:50.261581 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3b5dc287-df78-4df3-a78a-d149376d7d7a-policysync\") pod \"calico-node-kqctj\" (UID: \"3b5dc287-df78-4df3-a78a-d149376d7d7a\") " pod="calico-system/calico-node-kqctj" Jan 14 00:07:50.262168 kubelet[3576]: I0114 00:07:50.261590 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvptp\" (UniqueName: \"kubernetes.io/projected/3b5dc287-df78-4df3-a78a-d149376d7d7a-kube-api-access-wvptp\") pod \"calico-node-kqctj\" (UID: \"3b5dc287-df78-4df3-a78a-d149376d7d7a\") " pod="calico-system/calico-node-kqctj" Jan 14 00:07:50.262168 kubelet[3576]: I0114 00:07:50.261601 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b5dc287-df78-4df3-a78a-d149376d7d7a-tigera-ca-bundle\") pod \"calico-node-kqctj\" (UID: \"3b5dc287-df78-4df3-a78a-d149376d7d7a\") " pod="calico-system/calico-node-kqctj" Jan 14 00:07:50.373103 kubelet[3576]: E0114 00:07:50.372819 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.373103 kubelet[3576]: W0114 00:07:50.372838 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.373103 kubelet[3576]: E0114 00:07:50.372935 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:07:50.385129 kubelet[3576]: E0114 00:07:50.384931 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.385129 kubelet[3576]: W0114 00:07:50.384956 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.385129 kubelet[3576]: E0114 00:07:50.384973 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.404948 kubelet[3576]: E0114 00:07:50.404570 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jrsmb" podUID="485f55bc-0719-47ec-b844-40d9b8f86f0d" Jan 14 00:07:50.421010 containerd[2043]: time="2026-01-14T00:07:50.420879146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75f58c8c9f-wm87s,Uid:df166d75-e56a-403d-9e88-28e7e2493d74,Namespace:calico-system,Attempt:0,}" Jan 14 00:07:50.459898 kubelet[3576]: E0114 00:07:50.459809 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.459898 kubelet[3576]: W0114 00:07:50.459844 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.459898 kubelet[3576]: E0114 00:07:50.459868 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.460393 kubelet[3576]: E0114 00:07:50.460373 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.460534 kubelet[3576]: W0114 00:07:50.460464 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.460534 kubelet[3576]: E0114 00:07:50.460514 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.460771 kubelet[3576]: E0114 00:07:50.460756 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.461462 kubelet[3576]: W0114 00:07:50.461378 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.461462 kubelet[3576]: E0114 00:07:50.461399 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:07:50.463124 kubelet[3576]: E0114 00:07:50.462488 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.463124 kubelet[3576]: W0114 00:07:50.462501 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.463124 kubelet[3576]: E0114 00:07:50.462512 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.463678 kubelet[3576]: E0114 00:07:50.463633 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.463678 kubelet[3576]: W0114 00:07:50.463645 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.463678 kubelet[3576]: E0114 00:07:50.463655 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.464016 kubelet[3576]: E0114 00:07:50.463944 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.464285 kubelet[3576]: W0114 00:07:50.464107 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.464285 kubelet[3576]: E0114 00:07:50.464124 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.465190 kubelet[3576]: E0114 00:07:50.465178 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.465321 kubelet[3576]: W0114 00:07:50.465250 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.465321 kubelet[3576]: E0114 00:07:50.465264 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.465589 kubelet[3576]: E0114 00:07:50.465542 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.465589 kubelet[3576]: W0114 00:07:50.465552 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.465589 kubelet[3576]: E0114 00:07:50.465564 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:07:50.466832 kubelet[3576]: E0114 00:07:50.466817 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.466984 kubelet[3576]: W0114 00:07:50.466925 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.466984 kubelet[3576]: E0114 00:07:50.466942 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.468161 kubelet[3576]: E0114 00:07:50.468063 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.468161 kubelet[3576]: W0114 00:07:50.468101 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.468348 kubelet[3576]: E0114 00:07:50.468261 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.468750 kubelet[3576]: E0114 00:07:50.468671 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.468750 kubelet[3576]: W0114 00:07:50.468683 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.468750 kubelet[3576]: E0114 00:07:50.468693 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.469251 kubelet[3576]: E0114 00:07:50.469188 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.469251 kubelet[3576]: W0114 00:07:50.469201 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.469251 kubelet[3576]: E0114 00:07:50.469211 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.469942 kubelet[3576]: E0114 00:07:50.469786 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.469942 kubelet[3576]: W0114 00:07:50.469797 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.469942 kubelet[3576]: E0114 00:07:50.469806 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:07:50.470789 kubelet[3576]: E0114 00:07:50.470712 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.470789 kubelet[3576]: W0114 00:07:50.470724 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.470789 kubelet[3576]: E0114 00:07:50.470733 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.471840 kubelet[3576]: E0114 00:07:50.471795 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.471913 containerd[2043]: time="2026-01-14T00:07:50.471846785Z" level=info msg="connecting to shim 80b3b9f3626fe168e86325bae80ef9ee44821f0c4a92cff6252b423f2f0d16de" address="unix:///run/containerd/s/1f38e4caf53d3f1a66799a03158804c16b850eb5cc15e1a9105f2506746ddc96" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:07:50.472079 kubelet[3576]: W0114 00:07:50.471952 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.472079 kubelet[3576]: E0114 00:07:50.471967 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.472336 kubelet[3576]: E0114 00:07:50.472299 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.472336 kubelet[3576]: W0114 00:07:50.472310 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.472336 kubelet[3576]: E0114 00:07:50.472319 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.472695 kubelet[3576]: E0114 00:07:50.472681 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.472830 kubelet[3576]: W0114 00:07:50.472760 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.472830 kubelet[3576]: E0114 00:07:50.472773 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:07:50.473073 kubelet[3576]: E0114 00:07:50.473063 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.473187 kubelet[3576]: W0114 00:07:50.473149 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.473187 kubelet[3576]: E0114 00:07:50.473170 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.473713 kubelet[3576]: E0114 00:07:50.473655 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.473713 kubelet[3576]: W0114 00:07:50.473667 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.473713 kubelet[3576]: E0114 00:07:50.473678 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.474143 kubelet[3576]: E0114 00:07:50.474040 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.474143 kubelet[3576]: W0114 00:07:50.474107 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.474143 kubelet[3576]: E0114 00:07:50.474122 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.475015 kubelet[3576]: E0114 00:07:50.474939 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.475015 kubelet[3576]: W0114 00:07:50.474953 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.475015 kubelet[3576]: E0114 00:07:50.474963 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:07:50.475723 kubelet[3576]: I0114 00:07:50.475707 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/485f55bc-0719-47ec-b844-40d9b8f86f0d-kubelet-dir\") pod \"csi-node-driver-jrsmb\" (UID: \"485f55bc-0719-47ec-b844-40d9b8f86f0d\") " pod="calico-system/csi-node-driver-jrsmb" Jan 14 00:07:50.477021 kubelet[3576]: E0114 00:07:50.476803 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.477021 kubelet[3576]: W0114 00:07:50.476819 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.477021 kubelet[3576]: E0114 00:07:50.476830 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.477021 kubelet[3576]: I0114 00:07:50.476844 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/485f55bc-0719-47ec-b844-40d9b8f86f0d-varrun\") pod \"csi-node-driver-jrsmb\" (UID: \"485f55bc-0719-47ec-b844-40d9b8f86f0d\") " pod="calico-system/csi-node-driver-jrsmb" Jan 14 00:07:50.477386 kubelet[3576]: E0114 00:07:50.477279 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.477386 kubelet[3576]: W0114 00:07:50.477332 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.477386 kubelet[3576]: E0114 00:07:50.477346 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.477386 kubelet[3576]: I0114 00:07:50.477361 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/485f55bc-0719-47ec-b844-40d9b8f86f0d-socket-dir\") pod \"csi-node-driver-jrsmb\" (UID: \"485f55bc-0719-47ec-b844-40d9b8f86f0d\") " pod="calico-system/csi-node-driver-jrsmb" Jan 14 00:07:50.478344 kubelet[3576]: E0114 00:07:50.478061 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.478344 kubelet[3576]: W0114 00:07:50.478159 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.478344 kubelet[3576]: E0114 00:07:50.478176 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:07:50.478344 kubelet[3576]: I0114 00:07:50.478190 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkm5t\" (UniqueName: \"kubernetes.io/projected/485f55bc-0719-47ec-b844-40d9b8f86f0d-kube-api-access-nkm5t\") pod \"csi-node-driver-jrsmb\" (UID: \"485f55bc-0719-47ec-b844-40d9b8f86f0d\") " pod="calico-system/csi-node-driver-jrsmb" Jan 14 00:07:50.481687 kubelet[3576]: E0114 00:07:50.481168 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.481687 kubelet[3576]: W0114 00:07:50.481180 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.481687 kubelet[3576]: E0114 00:07:50.481191 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.481687 kubelet[3576]: I0114 00:07:50.481215 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/485f55bc-0719-47ec-b844-40d9b8f86f0d-registration-dir\") pod \"csi-node-driver-jrsmb\" (UID: \"485f55bc-0719-47ec-b844-40d9b8f86f0d\") " pod="calico-system/csi-node-driver-jrsmb" Jan 14 00:07:50.481687 kubelet[3576]: E0114 00:07:50.481596 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.481687 kubelet[3576]: W0114 00:07:50.481605 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.481687 kubelet[3576]: E0114 00:07:50.481614 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.483024 kubelet[3576]: E0114 00:07:50.482040 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.483024 kubelet[3576]: W0114 00:07:50.482051 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.483024 kubelet[3576]: E0114 00:07:50.482062 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.483312 kubelet[3576]: E0114 00:07:50.483219 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.483312 kubelet[3576]: W0114 00:07:50.483231 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.483312 kubelet[3576]: E0114 00:07:50.483241 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:07:50.483615 kubelet[3576]: E0114 00:07:50.483600 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.483699 kubelet[3576]: W0114 00:07:50.483687 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.483749 kubelet[3576]: E0114 00:07:50.483738 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.484081 kubelet[3576]: E0114 00:07:50.484067 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.484164 kubelet[3576]: W0114 00:07:50.484154 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.484226 kubelet[3576]: E0114 00:07:50.484214 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.484735 kubelet[3576]: E0114 00:07:50.484719 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.484819 kubelet[3576]: W0114 00:07:50.484808 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.485639 kubelet[3576]: E0114 00:07:50.485612 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.486100 kubelet[3576]: E0114 00:07:50.486060 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.486100 kubelet[3576]: W0114 00:07:50.486072 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.486100 kubelet[3576]: E0114 00:07:50.486083 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.486463 kubelet[3576]: E0114 00:07:50.486430 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.486463 kubelet[3576]: W0114 00:07:50.486441 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.486463 kubelet[3576]: E0114 00:07:50.486451 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:07:50.486852 kubelet[3576]: E0114 00:07:50.486821 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.486852 kubelet[3576]: W0114 00:07:50.486833 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.486852 kubelet[3576]: E0114 00:07:50.486841 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.487143 kubelet[3576]: E0114 00:07:50.487130 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.487246 kubelet[3576]: W0114 00:07:50.487213 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.487246 kubelet[3576]: E0114 00:07:50.487230 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.504165 systemd[1]: Started cri-containerd-80b3b9f3626fe168e86325bae80ef9ee44821f0c4a92cff6252b423f2f0d16de.scope - libcontainer container 80b3b9f3626fe168e86325bae80ef9ee44821f0c4a92cff6252b423f2f0d16de. Jan 14 00:07:50.517000 audit: BPF prog-id=175 op=LOAD Jan 14 00:07:50.518000 audit: BPF prog-id=176 op=LOAD Jan 14 00:07:50.518000 audit[4039]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4015 pid=4039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:50.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830623362396633363236666531363865383633323562616538306566 Jan 14 00:07:50.518000 audit: BPF prog-id=176 op=UNLOAD Jan 14 00:07:50.518000 audit[4039]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4015 pid=4039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:50.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830623362396633363236666531363865383633323562616538306566 Jan 14 00:07:50.518000 audit: BPF prog-id=177 op=LOAD Jan 14 00:07:50.518000 audit[4039]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4015 pid=4039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:50.518000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830623362396633363236666531363865383633323562616538306566 Jan 14 00:07:50.518000 audit: BPF prog-id=178 op=LOAD Jan 14 00:07:50.518000 audit[4039]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4015 pid=4039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:50.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830623362396633363236666531363865383633323562616538306566 Jan 14 00:07:50.518000 audit: BPF prog-id=178 op=UNLOAD Jan 14 00:07:50.518000 audit[4039]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4015 pid=4039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:50.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830623362396633363236666531363865383633323562616538306566 Jan 14 00:07:50.519000 audit: BPF prog-id=177 op=UNLOAD Jan 14 00:07:50.519000 audit[4039]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4015 pid=4039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:50.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830623362396633363236666531363865383633323562616538306566 Jan 14 00:07:50.519000 audit: BPF prog-id=179 op=LOAD Jan 14 00:07:50.519000 audit[4039]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4015 pid=4039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:50.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830623362396633363236666531363865383633323562616538306566 Jan 14 00:07:50.543769 containerd[2043]: time="2026-01-14T00:07:50.543692494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75f58c8c9f-wm87s,Uid:df166d75-e56a-403d-9e88-28e7e2493d74,Namespace:calico-system,Attempt:0,} returns sandbox id \"80b3b9f3626fe168e86325bae80ef9ee44821f0c4a92cff6252b423f2f0d16de\"" Jan 14 00:07:50.546659 containerd[2043]: time="2026-01-14T00:07:50.546588467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 14 00:07:50.558233 containerd[2043]: time="2026-01-14T00:07:50.558072501Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kqctj,Uid:3b5dc287-df78-4df3-a78a-d149376d7d7a,Namespace:calico-system,Attempt:0,}" Jan 14 00:07:50.582940 kubelet[3576]: E0114 00:07:50.582917 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.583216 kubelet[3576]: W0114 00:07:50.583088 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.583216 kubelet[3576]: E0114 00:07:50.583111 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.583455 kubelet[3576]: E0114 00:07:50.583442 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.583648 kubelet[3576]: W0114 00:07:50.583522 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.583648 kubelet[3576]: E0114 00:07:50.583542 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.584234 kubelet[3576]: E0114 00:07:50.584220 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.584432 kubelet[3576]: W0114 00:07:50.584311 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.584432 kubelet[3576]: E0114 00:07:50.584327 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.584668 kubelet[3576]: E0114 00:07:50.584563 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.584668 kubelet[3576]: W0114 00:07:50.584576 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.584668 kubelet[3576]: E0114 00:07:50.584585 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.585074 kubelet[3576]: E0114 00:07:50.585061 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.585250 kubelet[3576]: W0114 00:07:50.585131 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.585250 kubelet[3576]: E0114 00:07:50.585146 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:07:50.585452 kubelet[3576]: E0114 00:07:50.585423 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.585452 kubelet[3576]: W0114 00:07:50.585433 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.585452 kubelet[3576]: E0114 00:07:50.585442 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.585749 kubelet[3576]: E0114 00:07:50.585717 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.585749 kubelet[3576]: W0114 00:07:50.585728 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.585749 kubelet[3576]: E0114 00:07:50.585738 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.586025 kubelet[3576]: E0114 00:07:50.586014 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.586121 kubelet[3576]: W0114 00:07:50.586095 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.586121 kubelet[3576]: E0114 00:07:50.586111 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.586364 kubelet[3576]: E0114 00:07:50.586352 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.586627 kubelet[3576]: W0114 00:07:50.586421 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.586627 kubelet[3576]: E0114 00:07:50.586436 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.587038 kubelet[3576]: E0114 00:07:50.587024 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.587162 kubelet[3576]: W0114 00:07:50.587151 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.587307 kubelet[3576]: E0114 00:07:50.587252 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:07:50.587497 kubelet[3576]: E0114 00:07:50.587488 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.587605 kubelet[3576]: W0114 00:07:50.587549 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.587605 kubelet[3576]: E0114 00:07:50.587564 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.587815 kubelet[3576]: E0114 00:07:50.587805 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.587883 kubelet[3576]: W0114 00:07:50.587857 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.587883 kubelet[3576]: E0114 00:07:50.587870 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.588131 kubelet[3576]: E0114 00:07:50.588120 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.589124 kubelet[3576]: W0114 00:07:50.588168 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.589124 kubelet[3576]: E0114 00:07:50.588181 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.589124 kubelet[3576]: E0114 00:07:50.588328 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.589124 kubelet[3576]: W0114 00:07:50.588335 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.589124 kubelet[3576]: E0114 00:07:50.588343 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.589124 kubelet[3576]: E0114 00:07:50.588458 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.589124 kubelet[3576]: W0114 00:07:50.588463 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.589124 kubelet[3576]: E0114 00:07:50.588469 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:07:50.589124 kubelet[3576]: E0114 00:07:50.588659 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.589124 kubelet[3576]: W0114 00:07:50.588681 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.590112 kubelet[3576]: E0114 00:07:50.588690 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.590112 kubelet[3576]: E0114 00:07:50.589358 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.590112 kubelet[3576]: W0114 00:07:50.589368 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.590112 kubelet[3576]: E0114 00:07:50.589378 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.590344 kubelet[3576]: E0114 00:07:50.590322 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.590548 kubelet[3576]: W0114 00:07:50.590334 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.590548 kubelet[3576]: E0114 00:07:50.590488 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.590896 kubelet[3576]: E0114 00:07:50.590857 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.590896 kubelet[3576]: W0114 00:07:50.590873 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.590896 kubelet[3576]: E0114 00:07:50.590884 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.591226 kubelet[3576]: E0114 00:07:50.591216 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.591367 kubelet[3576]: W0114 00:07:50.591262 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.591367 kubelet[3576]: E0114 00:07:50.591273 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:07:50.591872 kubelet[3576]: E0114 00:07:50.591860 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.592247 kubelet[3576]: W0114 00:07:50.592154 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.592247 kubelet[3576]: E0114 00:07:50.592174 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.594087 kubelet[3576]: E0114 00:07:50.593094 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.594087 kubelet[3576]: W0114 00:07:50.593107 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.594087 kubelet[3576]: E0114 00:07:50.593117 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.594469 kubelet[3576]: E0114 00:07:50.594432 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.594469 kubelet[3576]: W0114 00:07:50.594446 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.594668 kubelet[3576]: E0114 00:07:50.594455 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.595143 kubelet[3576]: E0114 00:07:50.595128 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.595233 kubelet[3576]: W0114 00:07:50.595222 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.595674 kubelet[3576]: E0114 00:07:50.595280 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.596208 kubelet[3576]: E0114 00:07:50.596165 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.596372 kubelet[3576]: W0114 00:07:50.596307 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.596476 kubelet[3576]: E0114 00:07:50.596462 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:07:50.598064 kubelet[3576]: E0114 00:07:50.597962 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:50.598297 kubelet[3576]: W0114 00:07:50.598237 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:50.598297 kubelet[3576]: E0114 00:07:50.598275 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:50.606153 containerd[2043]: time="2026-01-14T00:07:50.606119422Z" level=info msg="connecting to shim bcc8546ed7a8e8eb3543893fadadccf43c067279d73c87e4989dd4c1892d9875" address="unix:///run/containerd/s/bca79658764d2a6b08c15cf2ad51e8692cfa0bf74bd73c51496ad0e3808ea9f6" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:07:50.624171 systemd[1]: Started cri-containerd-bcc8546ed7a8e8eb3543893fadadccf43c067279d73c87e4989dd4c1892d9875.scope - libcontainer container bcc8546ed7a8e8eb3543893fadadccf43c067279d73c87e4989dd4c1892d9875. Jan 14 00:07:50.630000 audit: BPF prog-id=180 op=LOAD Jan 14 00:07:50.630000 audit: BPF prog-id=181 op=LOAD Jan 14 00:07:50.630000 audit[4122]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4111 pid=4122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:50.630000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263633835343665643761386538656233353433383933666164616463 Jan 14 00:07:50.631000 audit: BPF prog-id=181 op=UNLOAD Jan 14 00:07:50.631000 audit[4122]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4111 pid=4122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:50.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263633835343665643761386538656233353433383933666164616463 Jan 14 00:07:50.631000 audit: BPF prog-id=182 op=LOAD Jan 14 00:07:50.631000 audit[4122]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4111 pid=4122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:50.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263633835343665643761386538656233353433383933666164616463 Jan 14 00:07:50.631000 audit: BPF prog-id=183 op=LOAD Jan 14 00:07:50.631000 audit[4122]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4111 
pid=4122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:50.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263633835343665643761386538656233353433383933666164616463 Jan 14 00:07:50.631000 audit: BPF prog-id=183 op=UNLOAD Jan 14 00:07:50.631000 audit[4122]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4111 pid=4122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:50.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263633835343665643761386538656233353433383933666164616463 Jan 14 00:07:50.631000 audit: BPF prog-id=182 op=UNLOAD Jan 14 00:07:50.631000 audit[4122]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4111 pid=4122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:50.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263633835343665643761386538656233353433383933666164616463 Jan 14 00:07:50.631000 audit: BPF prog-id=184 op=LOAD Jan 14 00:07:50.631000 audit[4122]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4111 pid=4122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:50.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263633835343665643761386538656233353433383933666164616463 Jan 14 00:07:50.647820 containerd[2043]: time="2026-01-14T00:07:50.647772533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kqctj,Uid:3b5dc287-df78-4df3-a78a-d149376d7d7a,Namespace:calico-system,Attempt:0,} returns sandbox id \"bcc8546ed7a8e8eb3543893fadadccf43c067279d73c87e4989dd4c1892d9875\"" Jan 14 00:07:51.087000 audit[4149]: NETFILTER_CFG table=filter:118 family=2 entries=22 op=nft_register_rule pid=4149 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:07:51.087000 audit[4149]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd57e0d40 a2=0 a3=1 items=0 ppid=3727 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:51.087000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:07:51.093000 audit[4149]: 
NETFILTER_CFG table=nat:119 family=2 entries=12 op=nft_register_rule pid=4149 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:07:51.093000 audit[4149]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd57e0d40 a2=0 a3=1 items=0 ppid=3727 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:51.093000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:07:51.732712 kubelet[3576]: E0114 00:07:51.732449 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jrsmb" podUID="485f55bc-0719-47ec-b844-40d9b8f86f0d" Jan 14 00:07:51.888155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1306687597.mount: Deactivated successfully. Jan 14 00:07:52.231659 containerd[2043]: time="2026-01-14T00:07:52.231182124Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:52.234596 containerd[2043]: time="2026-01-14T00:07:52.234556340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31719231" Jan 14 00:07:52.241888 containerd[2043]: time="2026-01-14T00:07:52.241858696Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:52.245981 containerd[2043]: time="2026-01-14T00:07:52.245953355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:52.246736 containerd[2043]: time="2026-01-14T00:07:52.246716376Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.700085619s" Jan 14 00:07:52.246935 containerd[2043]: time="2026-01-14T00:07:52.246874038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 14 00:07:52.248327 containerd[2043]: time="2026-01-14T00:07:52.248178039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 14 00:07:52.263914 containerd[2043]: time="2026-01-14T00:07:52.263882121Z" level=info msg="CreateContainer within sandbox \"80b3b9f3626fe168e86325bae80ef9ee44821f0c4a92cff6252b423f2f0d16de\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 14 00:07:52.284045 containerd[2043]: time="2026-01-14T00:07:52.283380074Z" level=info msg="Container 238d80f7f0f1491c1f7882eab6e76f399bda4abc171501b311f5fae9823817eb: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:07:52.302314 containerd[2043]: time="2026-01-14T00:07:52.302272404Z" level=info msg="CreateContainer within sandbox 
\"80b3b9f3626fe168e86325bae80ef9ee44821f0c4a92cff6252b423f2f0d16de\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"238d80f7f0f1491c1f7882eab6e76f399bda4abc171501b311f5fae9823817eb\"" Jan 14 00:07:52.303007 containerd[2043]: time="2026-01-14T00:07:52.302971823Z" level=info msg="StartContainer for \"238d80f7f0f1491c1f7882eab6e76f399bda4abc171501b311f5fae9823817eb\"" Jan 14 00:07:52.304035 containerd[2043]: time="2026-01-14T00:07:52.304013126Z" level=info msg="connecting to shim 238d80f7f0f1491c1f7882eab6e76f399bda4abc171501b311f5fae9823817eb" address="unix:///run/containerd/s/1f38e4caf53d3f1a66799a03158804c16b850eb5cc15e1a9105f2506746ddc96" protocol=ttrpc version=3 Jan 14 00:07:52.323160 systemd[1]: Started cri-containerd-238d80f7f0f1491c1f7882eab6e76f399bda4abc171501b311f5fae9823817eb.scope - libcontainer container 238d80f7f0f1491c1f7882eab6e76f399bda4abc171501b311f5fae9823817eb. Jan 14 00:07:52.330000 audit: BPF prog-id=185 op=LOAD Jan 14 00:07:52.331000 audit: BPF prog-id=186 op=LOAD Jan 14 00:07:52.331000 audit[4162]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4015 pid=4162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:52.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233386438306637663066313439316331663738383265616236653736 Jan 14 00:07:52.331000 audit: BPF prog-id=186 op=UNLOAD Jan 14 00:07:52.331000 audit[4162]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4015 pid=4162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:52.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233386438306637663066313439316331663738383265616236653736 Jan 14 00:07:52.331000 audit: BPF prog-id=187 op=LOAD Jan 14 00:07:52.331000 audit[4162]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4015 pid=4162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:52.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233386438306637663066313439316331663738383265616236653736 Jan 14 00:07:52.331000 audit: BPF prog-id=188 op=LOAD Jan 14 00:07:52.331000 audit[4162]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4015 pid=4162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:52.331000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233386438306637663066313439316331663738383265616236653736 Jan 14 00:07:52.331000 audit: BPF prog-id=188 op=UNLOAD Jan 14 00:07:52.331000 audit[4162]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4015 pid=4162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:52.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233386438306637663066313439316331663738383265616236653736 Jan 14 00:07:52.331000 audit: BPF prog-id=187 op=UNLOAD Jan 14 00:07:52.331000 audit[4162]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4015 pid=4162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:52.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233386438306637663066313439316331663738383265616236653736 Jan 14 00:07:52.331000 audit: BPF prog-id=189 op=LOAD Jan 14 00:07:52.331000 audit[4162]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4015 pid=4162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:52.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233386438306637663066313439316331663738383265616236653736 Jan 14 00:07:52.364745 containerd[2043]: time="2026-01-14T00:07:52.364712309Z" level=info msg="StartContainer for \"238d80f7f0f1491c1f7882eab6e76f399bda4abc171501b311f5fae9823817eb\" returns successfully" Jan 14 00:07:52.823883 kubelet[3576]: I0114 00:07:52.823644 3576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-75f58c8c9f-wm87s" podStartSLOduration=1.121181146 podStartE2EDuration="2.82362787s" podCreationTimestamp="2026-01-14 00:07:50 +0000 UTC" firstStartedPulling="2026-01-14 00:07:50.545782373 +0000 UTC m=+24.924184556" lastFinishedPulling="2026-01-14 00:07:52.248229097 +0000 UTC m=+26.626631280" observedRunningTime="2026-01-14 00:07:52.822772381 +0000 UTC m=+27.201174572" watchObservedRunningTime="2026-01-14 00:07:52.82362787 +0000 UTC m=+27.202030053" Jan 14 00:07:52.890660 kubelet[3576]: E0114 00:07:52.890625 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.890660 kubelet[3576]: W0114 00:07:52.890649 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in 
$PATH, output: "" Jan 14 00:07:52.890872 kubelet[3576]: E0114 00:07:52.890671 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:52.890872 kubelet[3576]: E0114 00:07:52.890795 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.891353 kubelet[3576]: W0114 00:07:52.890801 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:52.891394 kubelet[3576]: E0114 00:07:52.891359 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:52.891678 kubelet[3576]: E0114 00:07:52.891662 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.891678 kubelet[3576]: W0114 00:07:52.891674 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:52.891745 kubelet[3576]: E0114 00:07:52.891684 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:52.891842 kubelet[3576]: E0114 00:07:52.891829 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.891842 kubelet[3576]: W0114 00:07:52.891839 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:52.891877 kubelet[3576]: E0114 00:07:52.891847 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:52.892015 kubelet[3576]: E0114 00:07:52.891989 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.892015 kubelet[3576]: W0114 00:07:52.892012 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:52.892071 kubelet[3576]: E0114 00:07:52.892019 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:07:52.892168 kubelet[3576]: E0114 00:07:52.892156 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.892168 kubelet[3576]: W0114 00:07:52.892166 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:52.892211 kubelet[3576]: E0114 00:07:52.892173 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:52.892327 kubelet[3576]: E0114 00:07:52.892314 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.892327 kubelet[3576]: W0114 00:07:52.892325 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:52.892372 kubelet[3576]: E0114 00:07:52.892332 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:52.892479 kubelet[3576]: E0114 00:07:52.892467 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.892479 kubelet[3576]: W0114 00:07:52.892477 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:52.892521 kubelet[3576]: E0114 00:07:52.892484 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:52.892658 kubelet[3576]: E0114 00:07:52.892647 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.892658 kubelet[3576]: W0114 00:07:52.892657 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:52.892707 kubelet[3576]: E0114 00:07:52.892665 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:52.892839 kubelet[3576]: E0114 00:07:52.892794 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.892839 kubelet[3576]: W0114 00:07:52.892831 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:52.892839 kubelet[3576]: E0114 00:07:52.892840 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:07:52.892967 kubelet[3576]: E0114 00:07:52.892957 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.893007 kubelet[3576]: W0114 00:07:52.892964 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:52.893007 kubelet[3576]: E0114 00:07:52.892985 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:52.893149 kubelet[3576]: E0114 00:07:52.893138 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.893149 kubelet[3576]: W0114 00:07:52.893148 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:52.893204 kubelet[3576]: E0114 00:07:52.893156 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:52.893286 kubelet[3576]: E0114 00:07:52.893271 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.893313 kubelet[3576]: W0114 00:07:52.893293 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:52.893313 kubelet[3576]: E0114 00:07:52.893300 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:52.893421 kubelet[3576]: E0114 00:07:52.893410 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.893421 kubelet[3576]: W0114 00:07:52.893419 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:52.893469 kubelet[3576]: E0114 00:07:52.893425 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:52.893827 kubelet[3576]: E0114 00:07:52.893691 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.893827 kubelet[3576]: W0114 00:07:52.893823 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:52.893904 kubelet[3576]: E0114 00:07:52.893838 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:07:52.901177 kubelet[3576]: E0114 00:07:52.901109 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.901177 kubelet[3576]: W0114 00:07:52.901126 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:52.901177 kubelet[3576]: E0114 00:07:52.901142 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:52.901328 kubelet[3576]: E0114 00:07:52.901301 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.901328 kubelet[3576]: W0114 00:07:52.901308 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:52.901328 kubelet[3576]: E0114 00:07:52.901315 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:52.901623 kubelet[3576]: E0114 00:07:52.901437 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.901623 kubelet[3576]: W0114 00:07:52.901442 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:52.901623 kubelet[3576]: E0114 00:07:52.901449 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:52.901741 kubelet[3576]: E0114 00:07:52.901725 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.901898 kubelet[3576]: W0114 00:07:52.901787 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:52.901898 kubelet[3576]: E0114 00:07:52.901805 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:52.902061 kubelet[3576]: E0114 00:07:52.902047 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.902163 kubelet[3576]: W0114 00:07:52.902106 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:52.902163 kubelet[3576]: E0114 00:07:52.902119 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:07:52.902337 kubelet[3576]: E0114 00:07:52.902328 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.902497 kubelet[3576]: W0114 00:07:52.902405 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:52.902497 kubelet[3576]: E0114 00:07:52.902421 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:52.902697 kubelet[3576]: E0114 00:07:52.902686 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.902761 kubelet[3576]: W0114 00:07:52.902751 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:52.902815 kubelet[3576]: E0114 00:07:52.902804 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:52.903341 kubelet[3576]: E0114 00:07:52.903299 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.903341 kubelet[3576]: W0114 00:07:52.903312 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:52.903341 kubelet[3576]: E0114 00:07:52.903322 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:52.903706 kubelet[3576]: E0114 00:07:52.903694 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.903849 kubelet[3576]: W0114 00:07:52.903746 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:52.903849 kubelet[3576]: E0114 00:07:52.903759 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:52.904126 kubelet[3576]: E0114 00:07:52.904036 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.904126 kubelet[3576]: W0114 00:07:52.904048 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:52.904126 kubelet[3576]: E0114 00:07:52.904062 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:07:52.904385 kubelet[3576]: E0114 00:07:52.904374 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.904515 kubelet[3576]: W0114 00:07:52.904444 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:52.904515 kubelet[3576]: E0114 00:07:52.904459 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:52.904780 kubelet[3576]: E0114 00:07:52.904714 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.904922 kubelet[3576]: W0114 00:07:52.904835 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:52.904922 kubelet[3576]: E0114 00:07:52.904858 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:52.905198 kubelet[3576]: E0114 00:07:52.905152 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.905198 kubelet[3576]: W0114 00:07:52.905163 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:52.905198 kubelet[3576]: E0114 00:07:52.905173 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:52.905395 kubelet[3576]: E0114 00:07:52.905378 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.905395 kubelet[3576]: W0114 00:07:52.905390 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:52.905455 kubelet[3576]: E0114 00:07:52.905399 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:52.905506 kubelet[3576]: E0114 00:07:52.905494 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.905506 kubelet[3576]: W0114 00:07:52.905503 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:52.905548 kubelet[3576]: E0114 00:07:52.905509 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:07:52.905627 kubelet[3576]: E0114 00:07:52.905613 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.905627 kubelet[3576]: W0114 00:07:52.905621 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:52.905627 kubelet[3576]: E0114 00:07:52.905626 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:52.905881 kubelet[3576]: E0114 00:07:52.905871 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.905934 kubelet[3576]: W0114 00:07:52.905925 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:52.906039 kubelet[3576]: E0114 00:07:52.905963 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:07:52.906254 kubelet[3576]: E0114 00:07:52.906191 3576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:07:52.906254 kubelet[3576]: W0114 00:07:52.906201 3576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:07:52.906254 kubelet[3576]: E0114 00:07:52.906215 3576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:07:53.447202 containerd[2043]: time="2026-01-14T00:07:53.447150790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:53.451784 containerd[2043]: time="2026-01-14T00:07:53.451734635Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 14 00:07:53.455534 containerd[2043]: time="2026-01-14T00:07:53.455499586Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:53.459217 containerd[2043]: time="2026-01-14T00:07:53.459170620Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:53.460168 containerd[2043]: time="2026-01-14T00:07:53.460137385Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.211651358s" Jan 14 00:07:53.460226 containerd[2043]: time="2026-01-14T00:07:53.460170426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 14 00:07:53.468024 containerd[2043]: time="2026-01-14T00:07:53.467381555Z" level=info msg="CreateContainer within sandbox \"bcc8546ed7a8e8eb3543893fadadccf43c067279d73c87e4989dd4c1892d9875\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 14 00:07:53.497841 containerd[2043]: time="2026-01-14T00:07:53.495609590Z" level=info msg="Container aa0513348f5a85939a387b2871863793d291496ea06fdf1713bad4beb9376097: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:07:53.529411 containerd[2043]: time="2026-01-14T00:07:53.529369347Z" level=info msg="CreateContainer within sandbox \"bcc8546ed7a8e8eb3543893fadadccf43c067279d73c87e4989dd4c1892d9875\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"aa0513348f5a85939a387b2871863793d291496ea06fdf1713bad4beb9376097\"" Jan 14 00:07:53.530159 containerd[2043]: time="2026-01-14T00:07:53.530126071Z" level=info msg="StartContainer for \"aa0513348f5a85939a387b2871863793d291496ea06fdf1713bad4beb9376097\"" Jan 14 00:07:53.532555 containerd[2043]: time="2026-01-14T00:07:53.532141595Z" level=info msg="connecting to shim aa0513348f5a85939a387b2871863793d291496ea06fdf1713bad4beb9376097" address="unix:///run/containerd/s/bca79658764d2a6b08c15cf2ad51e8692cfa0bf74bd73c51496ad0e3808ea9f6" protocol=ttrpc version=3 Jan 14 00:07:53.553223 systemd[1]: Started cri-containerd-aa0513348f5a85939a387b2871863793d291496ea06fdf1713bad4beb9376097.scope - libcontainer container aa0513348f5a85939a387b2871863793d291496ea06fdf1713bad4beb9376097. 
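
The recurring FlexVolume probe failures above come from the kubelet's dynamic plugin probing: it scans the driver directories under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, invokes each driver binary with the single argument "init" (driver-call.go:149), and decodes the driver's stdout as JSON (driver-call.go:262). The nodeagent~uds/uds executable is not present ("executable file not found in $PATH"), so the call produces no output, and decoding an empty reply is what yields the repeated "unexpected end of JSON input" message. A minimal, self-contained Go sketch reproduces that exact error string; it is illustrative only, not kubelet source, and the driverStatus fields are assumptions for the example.

```go
// Minimal sketch: decoding an empty FlexVolume driver reply produces the same
// error string the kubelet logs above ("unexpected end of JSON input").
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus only loosely mirrors a FlexVolume driver reply; the field names
// here are illustrative assumptions, not the kubelet's actual types.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message"`
}

func main() {
	var st driverStatus
	// The nodeagent~uds/uds executable is missing, so the "init" call returns no output.
	output := []byte("")
	if err := json.Unmarshal(output, &st); err != nil {
		fmt.Println(err) // prints: unexpected end of JSON input
	}
}
```

In other words, the W-level line from driver-call.go:149 (missing executable) is the root cause; the E-level JSON error is just the empty output being rejected by the decoder.
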
Jan 14 00:07:53.588000 audit: BPF prog-id=190 op=LOAD Jan 14 00:07:53.592329 kernel: kauditd_printk_skb: 74 callbacks suppressed Jan 14 00:07:53.592386 kernel: audit: type=1334 audit(1768349273.588:578): prog-id=190 op=LOAD Jan 14 00:07:53.588000 audit[4237]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4111 pid=4237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:53.615290 kernel: audit: type=1300 audit(1768349273.588:578): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4111 pid=4237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:53.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161303531333334386635613835393339613338376232383731383633 Jan 14 00:07:53.633818 kernel: audit: type=1327 audit(1768349273.588:578): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161303531333334386635613835393339613338376232383731383633 Jan 14 00:07:53.588000 audit: BPF prog-id=191 op=LOAD Jan 14 00:07:53.639132 kernel: audit: type=1334 audit(1768349273.588:579): prog-id=191 op=LOAD Jan 14 00:07:53.588000 audit[4237]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4111 pid=4237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:53.656977 kernel: audit: type=1300 audit(1768349273.588:579): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4111 pid=4237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:53.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161303531333334386635613835393339613338376232383731383633 Jan 14 00:07:53.675223 kernel: audit: type=1327 audit(1768349273.588:579): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161303531333334386635613835393339613338376232383731383633 Jan 14 00:07:53.591000 audit: BPF prog-id=191 op=UNLOAD Jan 14 00:07:53.680867 kernel: audit: type=1334 audit(1768349273.591:580): prog-id=191 op=UNLOAD Jan 14 00:07:53.591000 audit[4237]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4111 pid=4237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:53.698541 kernel: audit: type=1300 
audit(1768349273.591:580): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4111 pid=4237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:53.591000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161303531333334386635613835393339613338376232383731383633 Jan 14 00:07:53.718426 kernel: audit: type=1327 audit(1768349273.591:580): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161303531333334386635613835393339613338376232383731383633 Jan 14 00:07:53.591000 audit: BPF prog-id=190 op=UNLOAD Jan 14 00:07:53.723398 kernel: audit: type=1334 audit(1768349273.591:581): prog-id=190 op=UNLOAD Jan 14 00:07:53.591000 audit[4237]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4111 pid=4237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:53.591000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161303531333334386635613835393339613338376232383731383633 Jan 14 00:07:53.591000 audit: BPF prog-id=192 op=LOAD Jan 14 00:07:53.591000 audit[4237]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4111 pid=4237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:53.591000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161303531333334386635613835393339613338376232383731383633 Jan 14 00:07:53.734026 kubelet[3576]: E0114 00:07:53.732916 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jrsmb" podUID="485f55bc-0719-47ec-b844-40d9b8f86f0d" Jan 14 00:07:53.735043 containerd[2043]: time="2026-01-14T00:07:53.734917903Z" level=info msg="StartContainer for \"aa0513348f5a85939a387b2871863793d291496ea06fdf1713bad4beb9376097\" returns successfully" Jan 14 00:07:53.744505 systemd[1]: cri-containerd-aa0513348f5a85939a387b2871863793d291496ea06fdf1713bad4beb9376097.scope: Deactivated successfully. 
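
The audit records in this span carry PROCTITLE fields as long hex strings because auditd hex-encodes the process title whenever it contains NUL bytes (the separators between argv elements). A short Go sketch, standard library only, decodes such a value back into readable arguments; the constant below is a truncated prefix copied verbatim from one of the records above, not a complete proctitle.

```go
// Minimal sketch: decode a hex-encoded audit PROCTITLE value into its
// NUL-separated argv form.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// Truncated prefix taken from the audit records above.
	const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	args := strings.Split(string(raw), "\x00")
	fmt.Println(strings.Join(args, " ")) // runc --root /run/containerd/runc/k8s.io
}
```

Decoded, these values spell out the runc invocations behind the cri-containerd scopes started by systemd (`runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/...`); the tail containing the full container ID is cut off in the records themselves.
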
Jan 14 00:07:53.747000 audit: BPF prog-id=192 op=UNLOAD Jan 14 00:07:53.751544 containerd[2043]: time="2026-01-14T00:07:53.751507570Z" level=info msg="received container exit event container_id:\"aa0513348f5a85939a387b2871863793d291496ea06fdf1713bad4beb9376097\" id:\"aa0513348f5a85939a387b2871863793d291496ea06fdf1713bad4beb9376097\" pid:4249 exited_at:{seconds:1768349273 nanos:750209953}" Jan 14 00:07:53.769834 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa0513348f5a85939a387b2871863793d291496ea06fdf1713bad4beb9376097-rootfs.mount: Deactivated successfully. Jan 14 00:07:53.858000 audit[4287]: NETFILTER_CFG table=filter:120 family=2 entries=21 op=nft_register_rule pid=4287 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:07:53.858000 audit[4287]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffca202ec0 a2=0 a3=1 items=0 ppid=3727 pid=4287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:53.858000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:07:53.862000 audit[4287]: NETFILTER_CFG table=nat:121 family=2 entries=19 op=nft_register_chain pid=4287 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:07:53.862000 audit[4287]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffca202ec0 a2=0 a3=1 items=0 ppid=3727 pid=4287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:53.862000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:07:55.733662 kubelet[3576]: E0114 00:07:55.733609 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jrsmb" podUID="485f55bc-0719-47ec-b844-40d9b8f86f0d" Jan 14 00:07:55.824046 containerd[2043]: time="2026-01-14T00:07:55.823748293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 14 00:07:57.732902 kubelet[3576]: E0114 00:07:57.732846 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jrsmb" podUID="485f55bc-0719-47ec-b844-40d9b8f86f0d" Jan 14 00:07:58.052118 containerd[2043]: time="2026-01-14T00:07:58.051538724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:58.055396 containerd[2043]: time="2026-01-14T00:07:58.055223659Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Jan 14 00:07:58.059314 containerd[2043]: time="2026-01-14T00:07:58.059284277Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:58.126508 containerd[2043]: time="2026-01-14T00:07:58.126436207Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:07:58.127136 containerd[2043]: time="2026-01-14T00:07:58.126974441Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.303185978s" Jan 14 00:07:58.127136 containerd[2043]: time="2026-01-14T00:07:58.127012930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 14 00:07:58.136377 containerd[2043]: time="2026-01-14T00:07:58.136355319Z" level=info msg="CreateContainer within sandbox \"bcc8546ed7a8e8eb3543893fadadccf43c067279d73c87e4989dd4c1892d9875\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 14 00:07:58.159513 containerd[2043]: time="2026-01-14T00:07:58.159479807Z" level=info msg="Container 792f839d5995eef9fd7e01b3ef142ea87afd6361f04d9128e68a02c96df129cb: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:07:58.178971 containerd[2043]: time="2026-01-14T00:07:58.178930801Z" level=info msg="CreateContainer within sandbox \"bcc8546ed7a8e8eb3543893fadadccf43c067279d73c87e4989dd4c1892d9875\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"792f839d5995eef9fd7e01b3ef142ea87afd6361f04d9128e68a02c96df129cb\"" Jan 14 00:07:58.181206 containerd[2043]: time="2026-01-14T00:07:58.179698834Z" level=info msg="StartContainer for \"792f839d5995eef9fd7e01b3ef142ea87afd6361f04d9128e68a02c96df129cb\"" Jan 14 00:07:58.181206 containerd[2043]: time="2026-01-14T00:07:58.180903305Z" level=info msg="connecting to shim 792f839d5995eef9fd7e01b3ef142ea87afd6361f04d9128e68a02c96df129cb" address="unix:///run/containerd/s/bca79658764d2a6b08c15cf2ad51e8692cfa0bf74bd73c51496ad0e3808ea9f6" protocol=ttrpc version=3 Jan 14 00:07:58.206175 systemd[1]: Started cri-containerd-792f839d5995eef9fd7e01b3ef142ea87afd6361f04d9128e68a02c96df129cb.scope - libcontainer container 792f839d5995eef9fd7e01b3ef142ea87afd6361f04d9128e68a02c96df129cb. 
Jan 14 00:07:58.247000 audit: BPF prog-id=193 op=LOAD Jan 14 00:07:58.247000 audit[4296]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4111 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:58.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739326638333964353939356565663966643765303162336566313432 Jan 14 00:07:58.247000 audit: BPF prog-id=194 op=LOAD Jan 14 00:07:58.247000 audit[4296]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4111 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:58.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739326638333964353939356565663966643765303162336566313432 Jan 14 00:07:58.247000 audit: BPF prog-id=194 op=UNLOAD Jan 14 00:07:58.247000 audit[4296]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4111 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:58.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739326638333964353939356565663966643765303162336566313432 Jan 14 00:07:58.247000 audit: BPF prog-id=193 op=UNLOAD Jan 14 00:07:58.247000 audit[4296]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4111 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:58.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739326638333964353939356565663966643765303162336566313432 Jan 14 00:07:58.247000 audit: BPF prog-id=195 op=LOAD Jan 14 00:07:58.247000 audit[4296]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4111 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:07:58.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739326638333964353939356565663966643765303162336566313432 Jan 14 00:07:58.268450 containerd[2043]: time="2026-01-14T00:07:58.268408298Z" level=info msg="StartContainer for 
\"792f839d5995eef9fd7e01b3ef142ea87afd6361f04d9128e68a02c96df129cb\" returns successfully" Jan 14 00:07:59.409777 containerd[2043]: time="2026-01-14T00:07:59.409725178Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 00:07:59.411924 systemd[1]: cri-containerd-792f839d5995eef9fd7e01b3ef142ea87afd6361f04d9128e68a02c96df129cb.scope: Deactivated successfully. Jan 14 00:07:59.412723 systemd[1]: cri-containerd-792f839d5995eef9fd7e01b3ef142ea87afd6361f04d9128e68a02c96df129cb.scope: Consumed 340ms CPU time, 192.6M memory peak, 165.9M written to disk. Jan 14 00:07:59.413863 containerd[2043]: time="2026-01-14T00:07:59.413835671Z" level=info msg="received container exit event container_id:\"792f839d5995eef9fd7e01b3ef142ea87afd6361f04d9128e68a02c96df129cb\" id:\"792f839d5995eef9fd7e01b3ef142ea87afd6361f04d9128e68a02c96df129cb\" pid:4311 exited_at:{seconds:1768349279 nanos:413559286}" Jan 14 00:07:59.416000 audit: BPF prog-id=195 op=UNLOAD Jan 14 00:07:59.420606 kernel: kauditd_printk_skb: 27 callbacks suppressed Jan 14 00:07:59.420684 kernel: audit: type=1334 audit(1768349279.416:591): prog-id=195 op=UNLOAD Jan 14 00:07:59.439415 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-792f839d5995eef9fd7e01b3ef142ea87afd6361f04d9128e68a02c96df129cb-rootfs.mount: Deactivated successfully. Jan 14 00:07:59.471534 kubelet[3576]: I0114 00:07:59.471497 3576 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 14 00:08:00.316558 systemd[1]: Created slice kubepods-besteffort-pod485f55bc_0719_47ec_b844_40d9b8f86f0d.slice - libcontainer container kubepods-besteffort-pod485f55bc_0719_47ec_b844_40d9b8f86f0d.slice. Jan 14 00:08:00.323904 systemd[1]: Created slice kubepods-burstable-podf82b7507_d8a2_4f12_bcc9_0288422aaee6.slice - libcontainer container kubepods-burstable-podf82b7507_d8a2_4f12_bcc9_0288422aaee6.slice. Jan 14 00:08:00.330821 containerd[2043]: time="2026-01-14T00:08:00.330755679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jrsmb,Uid:485f55bc-0719-47ec-b844-40d9b8f86f0d,Namespace:calico-system,Attempt:0,}" Jan 14 00:08:00.337481 systemd[1]: Created slice kubepods-besteffort-pod0d1fcd1d_b4e7_438b_8969_12e4764b6063.slice - libcontainer container kubepods-besteffort-pod0d1fcd1d_b4e7_438b_8969_12e4764b6063.slice. Jan 14 00:08:00.343415 systemd[1]: Created slice kubepods-besteffort-pod9bd87eab_d892_4bfa_b953_a8d30659ec75.slice - libcontainer container kubepods-besteffort-pod9bd87eab_d892_4bfa_b953_a8d30659ec75.slice. Jan 14 00:08:00.356099 systemd[1]: Created slice kubepods-besteffort-pod703a17f2_f0e0_477c_b942_3e7b76e59fda.slice - libcontainer container kubepods-besteffort-pod703a17f2_f0e0_477c_b942_3e7b76e59fda.slice. Jan 14 00:08:00.363593 systemd[1]: Created slice kubepods-burstable-pod293030d0_9140_4eed_bcb6_fdd77ad1a81b.slice - libcontainer container kubepods-burstable-pod293030d0_9140_4eed_bcb6_fdd77ad1a81b.slice. 
Jan 14 00:08:00.363833 kubelet[3576]: I0114 00:08:00.363677 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqcq6\" (UniqueName: \"kubernetes.io/projected/9bd87eab-d892-4bfa-b953-a8d30659ec75-kube-api-access-tqcq6\") pod \"calico-apiserver-5df7698878-qxjdm\" (UID: \"9bd87eab-d892-4bfa-b953-a8d30659ec75\") " pod="calico-apiserver/calico-apiserver-5df7698878-qxjdm" Jan 14 00:08:00.363833 kubelet[3576]: I0114 00:08:00.363710 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f82b7507-d8a2-4f12-bcc9-0288422aaee6-config-volume\") pod \"coredns-66bc5c9577-9k9wp\" (UID: \"f82b7507-d8a2-4f12-bcc9-0288422aaee6\") " pod="kube-system/coredns-66bc5c9577-9k9wp" Jan 14 00:08:00.363833 kubelet[3576]: I0114 00:08:00.363726 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7976\" (UniqueName: \"kubernetes.io/projected/703a17f2-f0e0-477c-b942-3e7b76e59fda-kube-api-access-q7976\") pod \"calico-apiserver-5df7698878-xcd87\" (UID: \"703a17f2-f0e0-477c-b942-3e7b76e59fda\") " pod="calico-apiserver/calico-apiserver-5df7698878-xcd87" Jan 14 00:08:00.363833 kubelet[3576]: I0114 00:08:00.363738 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlpth\" (UniqueName: \"kubernetes.io/projected/0d1fcd1d-b4e7-438b-8969-12e4764b6063-kube-api-access-zlpth\") pod \"calico-kube-controllers-7757665449-ss5lr\" (UID: \"0d1fcd1d-b4e7-438b-8969-12e4764b6063\") " pod="calico-system/calico-kube-controllers-7757665449-ss5lr" Jan 14 00:08:00.363833 kubelet[3576]: I0114 00:08:00.363750 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/703a17f2-f0e0-477c-b942-3e7b76e59fda-calico-apiserver-certs\") pod \"calico-apiserver-5df7698878-xcd87\" (UID: \"703a17f2-f0e0-477c-b942-3e7b76e59fda\") " pod="calico-apiserver/calico-apiserver-5df7698878-xcd87" Jan 14 00:08:00.363948 kubelet[3576]: I0114 00:08:00.363760 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9q9j\" (UniqueName: \"kubernetes.io/projected/f82b7507-d8a2-4f12-bcc9-0288422aaee6-kube-api-access-v9q9j\") pod \"coredns-66bc5c9577-9k9wp\" (UID: \"f82b7507-d8a2-4f12-bcc9-0288422aaee6\") " pod="kube-system/coredns-66bc5c9577-9k9wp" Jan 14 00:08:00.363948 kubelet[3576]: I0114 00:08:00.363772 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb-whisker-ca-bundle\") pod \"whisker-7769f7d548-rmgl7\" (UID: \"e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb\") " pod="calico-system/whisker-7769f7d548-rmgl7" Jan 14 00:08:00.363948 kubelet[3576]: I0114 00:08:00.363781 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/293030d0-9140-4eed-bcb6-fdd77ad1a81b-config-volume\") pod \"coredns-66bc5c9577-pf5pb\" (UID: \"293030d0-9140-4eed-bcb6-fdd77ad1a81b\") " pod="kube-system/coredns-66bc5c9577-pf5pb" Jan 14 00:08:00.363948 kubelet[3576]: I0114 00:08:00.363791 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nrjp\" 
(UniqueName: \"kubernetes.io/projected/293030d0-9140-4eed-bcb6-fdd77ad1a81b-kube-api-access-8nrjp\") pod \"coredns-66bc5c9577-pf5pb\" (UID: \"293030d0-9140-4eed-bcb6-fdd77ad1a81b\") " pod="kube-system/coredns-66bc5c9577-pf5pb" Jan 14 00:08:00.363948 kubelet[3576]: I0114 00:08:00.363803 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb-whisker-backend-key-pair\") pod \"whisker-7769f7d548-rmgl7\" (UID: \"e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb\") " pod="calico-system/whisker-7769f7d548-rmgl7" Jan 14 00:08:00.367411 kubelet[3576]: I0114 00:08:00.363824 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc2gq\" (UniqueName: \"kubernetes.io/projected/e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb-kube-api-access-kc2gq\") pod \"whisker-7769f7d548-rmgl7\" (UID: \"e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb\") " pod="calico-system/whisker-7769f7d548-rmgl7" Jan 14 00:08:00.367411 kubelet[3576]: I0114 00:08:00.363837 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d1fcd1d-b4e7-438b-8969-12e4764b6063-tigera-ca-bundle\") pod \"calico-kube-controllers-7757665449-ss5lr\" (UID: \"0d1fcd1d-b4e7-438b-8969-12e4764b6063\") " pod="calico-system/calico-kube-controllers-7757665449-ss5lr" Jan 14 00:08:00.367411 kubelet[3576]: I0114 00:08:00.363850 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9bd87eab-d892-4bfa-b953-a8d30659ec75-calico-apiserver-certs\") pod \"calico-apiserver-5df7698878-qxjdm\" (UID: \"9bd87eab-d892-4bfa-b953-a8d30659ec75\") " pod="calico-apiserver/calico-apiserver-5df7698878-qxjdm" Jan 14 00:08:00.378647 systemd[1]: Created slice kubepods-besteffort-poda80d0249_070a_486c_a74c_948bf824745a.slice - libcontainer container kubepods-besteffort-poda80d0249_070a_486c_a74c_948bf824745a.slice. Jan 14 00:08:00.387828 systemd[1]: Created slice kubepods-besteffort-pode1d69d8a_dfca_48bb_a4a5_e96ff97c8dcb.slice - libcontainer container kubepods-besteffort-pode1d69d8a_dfca_48bb_a4a5_e96ff97c8dcb.slice. Jan 14 00:08:00.393908 systemd[1]: Created slice kubepods-besteffort-podce38d8d5_b119_4ec5_8427_02101a96fcd0.slice - libcontainer container kubepods-besteffort-podce38d8d5_b119_4ec5_8427_02101a96fcd0.slice. Jan 14 00:08:00.433385 containerd[2043]: time="2026-01-14T00:08:00.433330901Z" level=error msg="Failed to destroy network for sandbox \"6cb16078133bd22001c879bc34494f271ff66206ae7037077601dae3918f2822\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:00.435275 systemd[1]: run-netns-cni\x2d2ed98024\x2d8b1b\x2d8f71\x2d75dc\x2d6846696e7f1a.mount: Deactivated successfully. 
Jan 14 00:08:00.447108 containerd[2043]: time="2026-01-14T00:08:00.446980804Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jrsmb,Uid:485f55bc-0719-47ec-b844-40d9b8f86f0d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cb16078133bd22001c879bc34494f271ff66206ae7037077601dae3918f2822\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:00.447299 kubelet[3576]: E0114 00:08:00.447250 3576 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cb16078133bd22001c879bc34494f271ff66206ae7037077601dae3918f2822\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:00.447343 kubelet[3576]: E0114 00:08:00.447319 3576 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cb16078133bd22001c879bc34494f271ff66206ae7037077601dae3918f2822\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jrsmb" Jan 14 00:08:00.447343 kubelet[3576]: E0114 00:08:00.447334 3576 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cb16078133bd22001c879bc34494f271ff66206ae7037077601dae3918f2822\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jrsmb" Jan 14 00:08:00.447392 kubelet[3576]: E0114 00:08:00.447371 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jrsmb_calico-system(485f55bc-0719-47ec-b844-40d9b8f86f0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jrsmb_calico-system(485f55bc-0719-47ec-b844-40d9b8f86f0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6cb16078133bd22001c879bc34494f271ff66206ae7037077601dae3918f2822\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jrsmb" podUID="485f55bc-0719-47ec-b844-40d9b8f86f0d" Jan 14 00:08:00.466889 kubelet[3576]: I0114 00:08:00.464235 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wq84\" (UniqueName: \"kubernetes.io/projected/ce38d8d5-b119-4ec5-8427-02101a96fcd0-kube-api-access-8wq84\") pod \"calico-apiserver-79f767b88f-ntdwm\" (UID: \"ce38d8d5-b119-4ec5-8427-02101a96fcd0\") " pod="calico-apiserver/calico-apiserver-79f767b88f-ntdwm" Jan 14 00:08:00.466889 kubelet[3576]: I0114 00:08:00.464287 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a80d0249-070a-486c-a74c-948bf824745a-config\") pod \"goldmane-7c778bb748-zrb5n\" (UID: 
\"a80d0249-070a-486c-a74c-948bf824745a\") " pod="calico-system/goldmane-7c778bb748-zrb5n" Jan 14 00:08:00.466889 kubelet[3576]: I0114 00:08:00.464346 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzxd8\" (UniqueName: \"kubernetes.io/projected/a80d0249-070a-486c-a74c-948bf824745a-kube-api-access-pzxd8\") pod \"goldmane-7c778bb748-zrb5n\" (UID: \"a80d0249-070a-486c-a74c-948bf824745a\") " pod="calico-system/goldmane-7c778bb748-zrb5n" Jan 14 00:08:00.466889 kubelet[3576]: I0114 00:08:00.464365 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a80d0249-070a-486c-a74c-948bf824745a-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-zrb5n\" (UID: \"a80d0249-070a-486c-a74c-948bf824745a\") " pod="calico-system/goldmane-7c778bb748-zrb5n" Jan 14 00:08:00.466889 kubelet[3576]: I0114 00:08:00.464374 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a80d0249-070a-486c-a74c-948bf824745a-goldmane-key-pair\") pod \"goldmane-7c778bb748-zrb5n\" (UID: \"a80d0249-070a-486c-a74c-948bf824745a\") " pod="calico-system/goldmane-7c778bb748-zrb5n" Jan 14 00:08:00.467128 kubelet[3576]: I0114 00:08:00.464387 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ce38d8d5-b119-4ec5-8427-02101a96fcd0-calico-apiserver-certs\") pod \"calico-apiserver-79f767b88f-ntdwm\" (UID: \"ce38d8d5-b119-4ec5-8427-02101a96fcd0\") " pod="calico-apiserver/calico-apiserver-79f767b88f-ntdwm" Jan 14 00:08:00.638967 containerd[2043]: time="2026-01-14T00:08:00.638859502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9k9wp,Uid:f82b7507-d8a2-4f12-bcc9-0288422aaee6,Namespace:kube-system,Attempt:0,}" Jan 14 00:08:00.648198 containerd[2043]: time="2026-01-14T00:08:00.648157393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7757665449-ss5lr,Uid:0d1fcd1d-b4e7-438b-8969-12e4764b6063,Namespace:calico-system,Attempt:0,}" Jan 14 00:08:00.657702 containerd[2043]: time="2026-01-14T00:08:00.657663731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df7698878-qxjdm,Uid:9bd87eab-d892-4bfa-b953-a8d30659ec75,Namespace:calico-apiserver,Attempt:0,}" Jan 14 00:08:00.666914 containerd[2043]: time="2026-01-14T00:08:00.666873028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df7698878-xcd87,Uid:703a17f2-f0e0-477c-b942-3e7b76e59fda,Namespace:calico-apiserver,Attempt:0,}" Jan 14 00:08:00.675853 containerd[2043]: time="2026-01-14T00:08:00.675820292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pf5pb,Uid:293030d0-9140-4eed-bcb6-fdd77ad1a81b,Namespace:kube-system,Attempt:0,}" Jan 14 00:08:00.693781 containerd[2043]: time="2026-01-14T00:08:00.693730076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-zrb5n,Uid:a80d0249-070a-486c-a74c-948bf824745a,Namespace:calico-system,Attempt:0,}" Jan 14 00:08:00.695208 containerd[2043]: time="2026-01-14T00:08:00.695116361Z" level=error msg="Failed to destroy network for sandbox \"bee0225ff9f774ecb4bd79b5cc05991d6f1dfee0f6a6f848d146ff19514add74\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:00.698622 containerd[2043]: time="2026-01-14T00:08:00.698585641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7769f7d548-rmgl7,Uid:e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb,Namespace:calico-system,Attempt:0,}" Jan 14 00:08:00.703917 containerd[2043]: time="2026-01-14T00:08:00.703891140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f767b88f-ntdwm,Uid:ce38d8d5-b119-4ec5-8427-02101a96fcd0,Namespace:calico-apiserver,Attempt:0,}" Jan 14 00:08:00.728100 containerd[2043]: time="2026-01-14T00:08:00.728011660Z" level=error msg="Failed to destroy network for sandbox \"d32983547b0dfd22e26ef5f9ac253e19f04ad56ff3d080e0a6ffb64a030af1f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:00.736191 containerd[2043]: time="2026-01-14T00:08:00.736149162Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9k9wp,Uid:f82b7507-d8a2-4f12-bcc9-0288422aaee6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bee0225ff9f774ecb4bd79b5cc05991d6f1dfee0f6a6f848d146ff19514add74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:00.737283 kubelet[3576]: E0114 00:08:00.737224 3576 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bee0225ff9f774ecb4bd79b5cc05991d6f1dfee0f6a6f848d146ff19514add74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:00.737885 kubelet[3576]: E0114 00:08:00.737705 3576 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bee0225ff9f774ecb4bd79b5cc05991d6f1dfee0f6a6f848d146ff19514add74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9k9wp" Jan 14 00:08:00.737885 kubelet[3576]: E0114 00:08:00.737739 3576 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bee0225ff9f774ecb4bd79b5cc05991d6f1dfee0f6a6f848d146ff19514add74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9k9wp" Jan 14 00:08:00.737885 kubelet[3576]: E0114 00:08:00.737792 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-9k9wp_kube-system(f82b7507-d8a2-4f12-bcc9-0288422aaee6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-9k9wp_kube-system(f82b7507-d8a2-4f12-bcc9-0288422aaee6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bee0225ff9f774ecb4bd79b5cc05991d6f1dfee0f6a6f848d146ff19514add74\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-9k9wp" podUID="f82b7507-d8a2-4f12-bcc9-0288422aaee6" Jan 14 00:08:00.758412 containerd[2043]: time="2026-01-14T00:08:00.758363589Z" level=error msg="Failed to destroy network for sandbox \"50faad36c255706fb82c3207d6f61c3f78d0c422686924fd83684d6ea48b1721\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:00.788013 containerd[2043]: time="2026-01-14T00:08:00.787921269Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7757665449-ss5lr,Uid:0d1fcd1d-b4e7-438b-8969-12e4764b6063,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d32983547b0dfd22e26ef5f9ac253e19f04ad56ff3d080e0a6ffb64a030af1f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:00.788989 kubelet[3576]: E0114 00:08:00.788942 3576 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d32983547b0dfd22e26ef5f9ac253e19f04ad56ff3d080e0a6ffb64a030af1f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:00.789239 kubelet[3576]: E0114 00:08:00.789215 3576 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d32983547b0dfd22e26ef5f9ac253e19f04ad56ff3d080e0a6ffb64a030af1f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7757665449-ss5lr" Jan 14 00:08:00.789472 kubelet[3576]: E0114 00:08:00.789448 3576 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d32983547b0dfd22e26ef5f9ac253e19f04ad56ff3d080e0a6ffb64a030af1f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7757665449-ss5lr" Jan 14 00:08:00.789599 kubelet[3576]: E0114 00:08:00.789578 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7757665449-ss5lr_calico-system(0d1fcd1d-b4e7-438b-8969-12e4764b6063)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7757665449-ss5lr_calico-system(0d1fcd1d-b4e7-438b-8969-12e4764b6063)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d32983547b0dfd22e26ef5f9ac253e19f04ad56ff3d080e0a6ffb64a030af1f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7757665449-ss5lr" podUID="0d1fcd1d-b4e7-438b-8969-12e4764b6063" Jan 14 00:08:00.793380 
containerd[2043]: time="2026-01-14T00:08:00.793325771Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df7698878-qxjdm,Uid:9bd87eab-d892-4bfa-b953-a8d30659ec75,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"50faad36c255706fb82c3207d6f61c3f78d0c422686924fd83684d6ea48b1721\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:00.793551 kubelet[3576]: E0114 00:08:00.793508 3576 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50faad36c255706fb82c3207d6f61c3f78d0c422686924fd83684d6ea48b1721\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:00.793593 kubelet[3576]: E0114 00:08:00.793548 3576 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50faad36c255706fb82c3207d6f61c3f78d0c422686924fd83684d6ea48b1721\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5df7698878-qxjdm" Jan 14 00:08:00.793593 kubelet[3576]: E0114 00:08:00.793563 3576 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50faad36c255706fb82c3207d6f61c3f78d0c422686924fd83684d6ea48b1721\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5df7698878-qxjdm" Jan 14 00:08:00.793665 kubelet[3576]: E0114 00:08:00.793604 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5df7698878-qxjdm_calico-apiserver(9bd87eab-d892-4bfa-b953-a8d30659ec75)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5df7698878-qxjdm_calico-apiserver(9bd87eab-d892-4bfa-b953-a8d30659ec75)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50faad36c255706fb82c3207d6f61c3f78d0c422686924fd83684d6ea48b1721\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5df7698878-qxjdm" podUID="9bd87eab-d892-4bfa-b953-a8d30659ec75" Jan 14 00:08:00.807388 containerd[2043]: time="2026-01-14T00:08:00.807264612Z" level=error msg="Failed to destroy network for sandbox \"cd5a4facd3fda736839bba787e7f362038335840551928c1967940ecc3253a54\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:00.809534 containerd[2043]: time="2026-01-14T00:08:00.809494547Z" level=error msg="Failed to destroy network for sandbox \"05eb5a3881068c3c814221b4ba7366800c8163987a0ddbc47d90312d3ef3afbc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:00.821928 containerd[2043]: time="2026-01-14T00:08:00.821818120Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df7698878-xcd87,Uid:703a17f2-f0e0-477c-b942-3e7b76e59fda,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"05eb5a3881068c3c814221b4ba7366800c8163987a0ddbc47d90312d3ef3afbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:00.824182 kubelet[3576]: E0114 00:08:00.823921 3576 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05eb5a3881068c3c814221b4ba7366800c8163987a0ddbc47d90312d3ef3afbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:00.824182 kubelet[3576]: E0114 00:08:00.823977 3576 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05eb5a3881068c3c814221b4ba7366800c8163987a0ddbc47d90312d3ef3afbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5df7698878-xcd87" Jan 14 00:08:00.824182 kubelet[3576]: E0114 00:08:00.824013 3576 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05eb5a3881068c3c814221b4ba7366800c8163987a0ddbc47d90312d3ef3afbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5df7698878-xcd87" Jan 14 00:08:00.824601 kubelet[3576]: E0114 00:08:00.824060 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5df7698878-xcd87_calico-apiserver(703a17f2-f0e0-477c-b942-3e7b76e59fda)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5df7698878-xcd87_calico-apiserver(703a17f2-f0e0-477c-b942-3e7b76e59fda)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05eb5a3881068c3c814221b4ba7366800c8163987a0ddbc47d90312d3ef3afbc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5df7698878-xcd87" podUID="703a17f2-f0e0-477c-b942-3e7b76e59fda" Jan 14 00:08:00.828715 containerd[2043]: time="2026-01-14T00:08:00.828620035Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pf5pb,Uid:293030d0-9140-4eed-bcb6-fdd77ad1a81b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd5a4facd3fda736839bba787e7f362038335840551928c1967940ecc3253a54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Jan 14 00:08:00.830470 kubelet[3576]: E0114 00:08:00.828819 3576 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd5a4facd3fda736839bba787e7f362038335840551928c1967940ecc3253a54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:00.830470 kubelet[3576]: E0114 00:08:00.828862 3576 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd5a4facd3fda736839bba787e7f362038335840551928c1967940ecc3253a54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-pf5pb" Jan 14 00:08:00.830470 kubelet[3576]: E0114 00:08:00.828880 3576 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd5a4facd3fda736839bba787e7f362038335840551928c1967940ecc3253a54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-pf5pb" Jan 14 00:08:00.832641 kubelet[3576]: E0114 00:08:00.830560 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-pf5pb_kube-system(293030d0-9140-4eed-bcb6-fdd77ad1a81b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-pf5pb_kube-system(293030d0-9140-4eed-bcb6-fdd77ad1a81b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd5a4facd3fda736839bba787e7f362038335840551928c1967940ecc3253a54\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-pf5pb" podUID="293030d0-9140-4eed-bcb6-fdd77ad1a81b" Jan 14 00:08:00.865040 containerd[2043]: time="2026-01-14T00:08:00.863626570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 14 00:08:00.888306 containerd[2043]: time="2026-01-14T00:08:00.888237563Z" level=error msg="Failed to destroy network for sandbox \"4355c5f39c7745fb566f8aa8b175e791dd6c19d0e81f16e16b101c2756712d39\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:00.897090 containerd[2043]: time="2026-01-14T00:08:00.896941611Z" level=error msg="Failed to destroy network for sandbox \"ddad7fd5d796f184aed89312ffabda15f49d349fe50b3734c81cb4e367ded99b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:00.903359 containerd[2043]: time="2026-01-14T00:08:00.903317872Z" level=error msg="Failed to destroy network for sandbox \"5b5a000ddc564275aa78b8e70d5cb1fb627cdfb9ea39435b9292227796babb0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 
00:08:00.920775 containerd[2043]: time="2026-01-14T00:08:00.920707800Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-zrb5n,Uid:a80d0249-070a-486c-a74c-948bf824745a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4355c5f39c7745fb566f8aa8b175e791dd6c19d0e81f16e16b101c2756712d39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:00.921070 kubelet[3576]: E0114 00:08:00.920967 3576 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4355c5f39c7745fb566f8aa8b175e791dd6c19d0e81f16e16b101c2756712d39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:00.921142 kubelet[3576]: E0114 00:08:00.921094 3576 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4355c5f39c7745fb566f8aa8b175e791dd6c19d0e81f16e16b101c2756712d39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-zrb5n" Jan 14 00:08:00.921142 kubelet[3576]: E0114 00:08:00.921124 3576 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4355c5f39c7745fb566f8aa8b175e791dd6c19d0e81f16e16b101c2756712d39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-zrb5n" Jan 14 00:08:00.921399 kubelet[3576]: E0114 00:08:00.921176 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-zrb5n_calico-system(a80d0249-070a-486c-a74c-948bf824745a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-zrb5n_calico-system(a80d0249-070a-486c-a74c-948bf824745a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4355c5f39c7745fb566f8aa8b175e791dd6c19d0e81f16e16b101c2756712d39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-zrb5n" podUID="a80d0249-070a-486c-a74c-948bf824745a" Jan 14 00:08:00.939206 containerd[2043]: time="2026-01-14T00:08:00.938955556Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f767b88f-ntdwm,Uid:ce38d8d5-b119-4ec5-8427-02101a96fcd0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b5a000ddc564275aa78b8e70d5cb1fb627cdfb9ea39435b9292227796babb0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:00.939638 kubelet[3576]: E0114 00:08:00.939570 3576 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"5b5a000ddc564275aa78b8e70d5cb1fb627cdfb9ea39435b9292227796babb0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:00.939638 kubelet[3576]: E0114 00:08:00.939621 3576 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b5a000ddc564275aa78b8e70d5cb1fb627cdfb9ea39435b9292227796babb0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79f767b88f-ntdwm" Jan 14 00:08:00.939829 kubelet[3576]: E0114 00:08:00.939762 3576 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b5a000ddc564275aa78b8e70d5cb1fb627cdfb9ea39435b9292227796babb0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79f767b88f-ntdwm" Jan 14 00:08:00.939904 kubelet[3576]: E0114 00:08:00.939882 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79f767b88f-ntdwm_calico-apiserver(ce38d8d5-b119-4ec5-8427-02101a96fcd0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79f767b88f-ntdwm_calico-apiserver(ce38d8d5-b119-4ec5-8427-02101a96fcd0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b5a000ddc564275aa78b8e70d5cb1fb627cdfb9ea39435b9292227796babb0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79f767b88f-ntdwm" podUID="ce38d8d5-b119-4ec5-8427-02101a96fcd0" Jan 14 00:08:00.942129 containerd[2043]: time="2026-01-14T00:08:00.942092416Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7769f7d548-rmgl7,Uid:e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddad7fd5d796f184aed89312ffabda15f49d349fe50b3734c81cb4e367ded99b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:00.942484 kubelet[3576]: E0114 00:08:00.942420 3576 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddad7fd5d796f184aed89312ffabda15f49d349fe50b3734c81cb4e367ded99b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:00.942484 kubelet[3576]: E0114 00:08:00.942455 3576 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddad7fd5d796f184aed89312ffabda15f49d349fe50b3734c81cb4e367ded99b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7769f7d548-rmgl7" Jan 14 00:08:00.942625 kubelet[3576]: E0114 00:08:00.942471 3576 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddad7fd5d796f184aed89312ffabda15f49d349fe50b3734c81cb4e367ded99b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7769f7d548-rmgl7" Jan 14 00:08:00.942740 kubelet[3576]: E0114 00:08:00.942687 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7769f7d548-rmgl7_calico-system(e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7769f7d548-rmgl7_calico-system(e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ddad7fd5d796f184aed89312ffabda15f49d349fe50b3734c81cb4e367ded99b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7769f7d548-rmgl7" podUID="e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb" Jan 14 00:08:10.426647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4083900876.mount: Deactivated successfully. Jan 14 00:08:13.810258 containerd[2043]: time="2026-01-14T00:08:13.810194925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7769f7d548-rmgl7,Uid:e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb,Namespace:calico-system,Attempt:0,}" Jan 14 00:08:15.295292 containerd[2043]: time="2026-01-14T00:08:15.295207165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df7698878-xcd87,Uid:703a17f2-f0e0-477c-b942-3e7b76e59fda,Namespace:calico-apiserver,Attempt:0,}" Jan 14 00:08:16.645034 containerd[2043]: time="2026-01-14T00:08:16.644919422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df7698878-qxjdm,Uid:9bd87eab-d892-4bfa-b953-a8d30659ec75,Namespace:calico-apiserver,Attempt:0,}" Jan 14 00:08:17.999008 containerd[2043]: time="2026-01-14T00:08:17.998898874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f767b88f-ntdwm,Uid:ce38d8d5-b119-4ec5-8427-02101a96fcd0,Namespace:calico-apiserver,Attempt:0,}" Jan 14 00:08:18.205016 containerd[2043]: time="2026-01-14T00:08:18.204956871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:08:18.205787 containerd[2043]: time="2026-01-14T00:08:18.205596190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pf5pb,Uid:293030d0-9140-4eed-bcb6-fdd77ad1a81b,Namespace:kube-system,Attempt:0,}" Jan 14 00:08:18.298780 containerd[2043]: time="2026-01-14T00:08:18.298547993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-zrb5n,Uid:a80d0249-070a-486c-a74c-948bf824745a,Namespace:calico-system,Attempt:0,}" Jan 14 00:08:18.347946 containerd[2043]: time="2026-01-14T00:08:18.347895890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jrsmb,Uid:485f55bc-0719-47ec-b844-40d9b8f86f0d,Namespace:calico-system,Attempt:0,}" Jan 14 00:08:18.411956 containerd[2043]: time="2026-01-14T00:08:18.411915406Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7757665449-ss5lr,Uid:0d1fcd1d-b4e7-438b-8969-12e4764b6063,Namespace:calico-system,Attempt:0,}" Jan 14 00:08:18.461335 containerd[2043]: time="2026-01-14T00:08:18.461277799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9k9wp,Uid:f82b7507-d8a2-4f12-bcc9-0288422aaee6,Namespace:kube-system,Attempt:0,}" Jan 14 00:08:18.463163 containerd[2043]: time="2026-01-14T00:08:18.463116098Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Jan 14 00:08:18.578391 containerd[2043]: time="2026-01-14T00:08:18.578202581Z" level=error msg="Failed to destroy network for sandbox \"9187b1320b39e122fec6b3d53db648f2eef5c5b2dd76b87e7cd3a06e3763805e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:18.579699 systemd[1]: run-netns-cni\x2d110c4d9d\x2daf55\x2d699e\x2dc1b4\x2d194ae5d7841e.mount: Deactivated successfully. Jan 14 00:08:18.693874 containerd[2043]: time="2026-01-14T00:08:18.693419701Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:08:18.702858 containerd[2043]: time="2026-01-14T00:08:18.702811000Z" level=error msg="Failed to destroy network for sandbox \"3770962ecd7d8007e4422a74877e42ecf25390e3953914f304ae25e06419b2db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:18.780579 containerd[2043]: time="2026-01-14T00:08:18.780514195Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7769f7d548-rmgl7,Uid:e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9187b1320b39e122fec6b3d53db648f2eef5c5b2dd76b87e7cd3a06e3763805e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:18.780752 kubelet[3576]: E0114 00:08:18.780722 3576 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9187b1320b39e122fec6b3d53db648f2eef5c5b2dd76b87e7cd3a06e3763805e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:18.781039 kubelet[3576]: E0114 00:08:18.780773 3576 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9187b1320b39e122fec6b3d53db648f2eef5c5b2dd76b87e7cd3a06e3763805e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7769f7d548-rmgl7" Jan 14 00:08:18.781039 kubelet[3576]: E0114 00:08:18.780788 3576 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9187b1320b39e122fec6b3d53db648f2eef5c5b2dd76b87e7cd3a06e3763805e\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7769f7d548-rmgl7" Jan 14 00:08:18.781039 kubelet[3576]: E0114 00:08:18.780831 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7769f7d548-rmgl7_calico-system(e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7769f7d548-rmgl7_calico-system(e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9187b1320b39e122fec6b3d53db648f2eef5c5b2dd76b87e7cd3a06e3763805e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7769f7d548-rmgl7" podUID="e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb" Jan 14 00:08:18.809792 containerd[2043]: time="2026-01-14T00:08:18.809589543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:08:18.809792 containerd[2043]: time="2026-01-14T00:08:18.809616088Z" level=error msg="Failed to destroy network for sandbox \"71f08ed811a392407a0a1f94079a0d2fd2aeb27095430366f8c5564a8ee3e0a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:18.811528 containerd[2043]: time="2026-01-14T00:08:18.811495556Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 17.94563625s" Jan 14 00:08:18.812641 containerd[2043]: time="2026-01-14T00:08:18.812616452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 14 00:08:18.813008 containerd[2043]: time="2026-01-14T00:08:18.812087385Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df7698878-xcd87,Uid:703a17f2-f0e0-477c-b942-3e7b76e59fda,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3770962ecd7d8007e4422a74877e42ecf25390e3953914f304ae25e06419b2db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:18.813244 kubelet[3576]: E0114 00:08:18.813177 3576 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3770962ecd7d8007e4422a74877e42ecf25390e3953914f304ae25e06419b2db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:18.813244 kubelet[3576]: E0114 00:08:18.813226 3576 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"3770962ecd7d8007e4422a74877e42ecf25390e3953914f304ae25e06419b2db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5df7698878-xcd87" Jan 14 00:08:18.813244 kubelet[3576]: E0114 00:08:18.813242 3576 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3770962ecd7d8007e4422a74877e42ecf25390e3953914f304ae25e06419b2db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5df7698878-xcd87" Jan 14 00:08:18.813636 kubelet[3576]: E0114 00:08:18.813292 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5df7698878-xcd87_calico-apiserver(703a17f2-f0e0-477c-b942-3e7b76e59fda)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5df7698878-xcd87_calico-apiserver(703a17f2-f0e0-477c-b942-3e7b76e59fda)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3770962ecd7d8007e4422a74877e42ecf25390e3953914f304ae25e06419b2db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5df7698878-xcd87" podUID="703a17f2-f0e0-477c-b942-3e7b76e59fda" Jan 14 00:08:18.820569 containerd[2043]: time="2026-01-14T00:08:18.820453480Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df7698878-qxjdm,Uid:9bd87eab-d892-4bfa-b953-a8d30659ec75,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"71f08ed811a392407a0a1f94079a0d2fd2aeb27095430366f8c5564a8ee3e0a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:18.820691 kubelet[3576]: E0114 00:08:18.820647 3576 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71f08ed811a392407a0a1f94079a0d2fd2aeb27095430366f8c5564a8ee3e0a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:18.820742 kubelet[3576]: E0114 00:08:18.820695 3576 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71f08ed811a392407a0a1f94079a0d2fd2aeb27095430366f8c5564a8ee3e0a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5df7698878-qxjdm" Jan 14 00:08:18.820742 kubelet[3576]: E0114 00:08:18.820710 3576 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71f08ed811a392407a0a1f94079a0d2fd2aeb27095430366f8c5564a8ee3e0a5\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5df7698878-qxjdm" Jan 14 00:08:18.821260 kubelet[3576]: E0114 00:08:18.820753 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5df7698878-qxjdm_calico-apiserver(9bd87eab-d892-4bfa-b953-a8d30659ec75)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5df7698878-qxjdm_calico-apiserver(9bd87eab-d892-4bfa-b953-a8d30659ec75)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71f08ed811a392407a0a1f94079a0d2fd2aeb27095430366f8c5564a8ee3e0a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5df7698878-qxjdm" podUID="9bd87eab-d892-4bfa-b953-a8d30659ec75" Jan 14 00:08:18.834174 containerd[2043]: time="2026-01-14T00:08:18.834074357Z" level=info msg="CreateContainer within sandbox \"bcc8546ed7a8e8eb3543893fadadccf43c067279d73c87e4989dd4c1892d9875\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 14 00:08:18.849841 containerd[2043]: time="2026-01-14T00:08:18.849777669Z" level=error msg="Failed to destroy network for sandbox \"250ea797e741a21d328bd665b3e677f97231dc174ad999326c6cc7cc6168cf2b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:18.860484 containerd[2043]: time="2026-01-14T00:08:18.860437310Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f767b88f-ntdwm,Uid:ce38d8d5-b119-4ec5-8427-02101a96fcd0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"250ea797e741a21d328bd665b3e677f97231dc174ad999326c6cc7cc6168cf2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:18.860812 kubelet[3576]: E0114 00:08:18.860727 3576 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"250ea797e741a21d328bd665b3e677f97231dc174ad999326c6cc7cc6168cf2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:18.860812 kubelet[3576]: E0114 00:08:18.860784 3576 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"250ea797e741a21d328bd665b3e677f97231dc174ad999326c6cc7cc6168cf2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79f767b88f-ntdwm" Jan 14 00:08:18.860812 kubelet[3576]: E0114 00:08:18.860801 3576 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"250ea797e741a21d328bd665b3e677f97231dc174ad999326c6cc7cc6168cf2b\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79f767b88f-ntdwm" Jan 14 00:08:18.861094 kubelet[3576]: E0114 00:08:18.860848 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79f767b88f-ntdwm_calico-apiserver(ce38d8d5-b119-4ec5-8427-02101a96fcd0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79f767b88f-ntdwm_calico-apiserver(ce38d8d5-b119-4ec5-8427-02101a96fcd0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"250ea797e741a21d328bd665b3e677f97231dc174ad999326c6cc7cc6168cf2b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79f767b88f-ntdwm" podUID="ce38d8d5-b119-4ec5-8427-02101a96fcd0" Jan 14 00:08:18.869460 containerd[2043]: time="2026-01-14T00:08:18.869415675Z" level=info msg="Container d7ce76568f512edc1175ccf0685059fb3e6aec50dbc535fdf409f9fedf351afe: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:08:18.897289 containerd[2043]: time="2026-01-14T00:08:18.897241378Z" level=info msg="CreateContainer within sandbox \"bcc8546ed7a8e8eb3543893fadadccf43c067279d73c87e4989dd4c1892d9875\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d7ce76568f512edc1175ccf0685059fb3e6aec50dbc535fdf409f9fedf351afe\"" Jan 14 00:08:18.900949 containerd[2043]: time="2026-01-14T00:08:18.900895654Z" level=info msg="StartContainer for \"d7ce76568f512edc1175ccf0685059fb3e6aec50dbc535fdf409f9fedf351afe\"" Jan 14 00:08:18.902009 containerd[2043]: time="2026-01-14T00:08:18.901972013Z" level=info msg="connecting to shim d7ce76568f512edc1175ccf0685059fb3e6aec50dbc535fdf409f9fedf351afe" address="unix:///run/containerd/s/bca79658764d2a6b08c15cf2ad51e8692cfa0bf74bd73c51496ad0e3808ea9f6" protocol=ttrpc version=3 Jan 14 00:08:18.907268 containerd[2043]: time="2026-01-14T00:08:18.907199866Z" level=error msg="Failed to destroy network for sandbox \"a7168ce266cd8ee206f999d346a4d6d1ba5cfc2f5a1c9559be1359822bcbb0e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:18.909511 containerd[2043]: time="2026-01-14T00:08:18.909485997Z" level=error msg="Failed to destroy network for sandbox \"eb063bed18e38817f4f6bb33b66e5e4894b07d3833d68c6e1efdb62088790fa5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:18.911707 containerd[2043]: time="2026-01-14T00:08:18.911680724Z" level=error msg="Failed to destroy network for sandbox \"c424a9b08c4285b96443fd5ce7bfc739be070c587a3f2c137fa3955d8f12a5cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:18.914488 containerd[2043]: time="2026-01-14T00:08:18.914442352Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jrsmb,Uid:485f55bc-0719-47ec-b844-40d9b8f86f0d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"a7168ce266cd8ee206f999d346a4d6d1ba5cfc2f5a1c9559be1359822bcbb0e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:18.914645 kubelet[3576]: E0114 00:08:18.914609 3576 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7168ce266cd8ee206f999d346a4d6d1ba5cfc2f5a1c9559be1359822bcbb0e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:18.914722 kubelet[3576]: E0114 00:08:18.914659 3576 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7168ce266cd8ee206f999d346a4d6d1ba5cfc2f5a1c9559be1359822bcbb0e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jrsmb" Jan 14 00:08:18.914722 kubelet[3576]: E0114 00:08:18.914674 3576 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7168ce266cd8ee206f999d346a4d6d1ba5cfc2f5a1c9559be1359822bcbb0e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jrsmb" Jan 14 00:08:18.914773 kubelet[3576]: E0114 00:08:18.914714 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jrsmb_calico-system(485f55bc-0719-47ec-b844-40d9b8f86f0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jrsmb_calico-system(485f55bc-0719-47ec-b844-40d9b8f86f0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7168ce266cd8ee206f999d346a4d6d1ba5cfc2f5a1c9559be1359822bcbb0e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jrsmb" podUID="485f55bc-0719-47ec-b844-40d9b8f86f0d" Jan 14 00:08:18.919798 containerd[2043]: time="2026-01-14T00:08:18.919765881Z" level=error msg="Failed to destroy network for sandbox \"4e91a87d1b77d9ce73208a579fef523d1312f79640466bf095762a73de56ad92\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:18.920194 containerd[2043]: time="2026-01-14T00:08:18.920167239Z" level=error msg="Failed to destroy network for sandbox \"6e6ea6f1eee778bce859d4899c9f7b65fd47326cb826e655ed0ac986f3576bca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:18.924912 containerd[2043]: time="2026-01-14T00:08:18.924879345Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pf5pb,Uid:293030d0-9140-4eed-bcb6-fdd77ad1a81b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"c424a9b08c4285b96443fd5ce7bfc739be070c587a3f2c137fa3955d8f12a5cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:18.925606 kubelet[3576]: E0114 00:08:18.925556 3576 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c424a9b08c4285b96443fd5ce7bfc739be070c587a3f2c137fa3955d8f12a5cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:18.925701 kubelet[3576]: E0114 00:08:18.925614 3576 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c424a9b08c4285b96443fd5ce7bfc739be070c587a3f2c137fa3955d8f12a5cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-pf5pb" Jan 14 00:08:18.925701 kubelet[3576]: E0114 00:08:18.925629 3576 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c424a9b08c4285b96443fd5ce7bfc739be070c587a3f2c137fa3955d8f12a5cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-pf5pb" Jan 14 00:08:18.925771 kubelet[3576]: E0114 00:08:18.925740 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-pf5pb_kube-system(293030d0-9140-4eed-bcb6-fdd77ad1a81b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-pf5pb_kube-system(293030d0-9140-4eed-bcb6-fdd77ad1a81b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c424a9b08c4285b96443fd5ce7bfc739be070c587a3f2c137fa3955d8f12a5cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-pf5pb" podUID="293030d0-9140-4eed-bcb6-fdd77ad1a81b" Jan 14 00:08:18.929281 containerd[2043]: time="2026-01-14T00:08:18.929104786Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7757665449-ss5lr,Uid:0d1fcd1d-b4e7-438b-8969-12e4764b6063,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb063bed18e38817f4f6bb33b66e5e4894b07d3833d68c6e1efdb62088790fa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:18.929545 kubelet[3576]: E0114 00:08:18.929431 3576 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb063bed18e38817f4f6bb33b66e5e4894b07d3833d68c6e1efdb62088790fa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 
00:08:18.929545 kubelet[3576]: E0114 00:08:18.929470 3576 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb063bed18e38817f4f6bb33b66e5e4894b07d3833d68c6e1efdb62088790fa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7757665449-ss5lr" Jan 14 00:08:18.929545 kubelet[3576]: E0114 00:08:18.929482 3576 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb063bed18e38817f4f6bb33b66e5e4894b07d3833d68c6e1efdb62088790fa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7757665449-ss5lr" Jan 14 00:08:18.929640 kubelet[3576]: E0114 00:08:18.929515 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7757665449-ss5lr_calico-system(0d1fcd1d-b4e7-438b-8969-12e4764b6063)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7757665449-ss5lr_calico-system(0d1fcd1d-b4e7-438b-8969-12e4764b6063)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb063bed18e38817f4f6bb33b66e5e4894b07d3833d68c6e1efdb62088790fa5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7757665449-ss5lr" podUID="0d1fcd1d-b4e7-438b-8969-12e4764b6063" Jan 14 00:08:18.936187 systemd[1]: Started cri-containerd-d7ce76568f512edc1175ccf0685059fb3e6aec50dbc535fdf409f9fedf351afe.scope - libcontainer container d7ce76568f512edc1175ccf0685059fb3e6aec50dbc535fdf409f9fedf351afe. 
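[Note, not part of the log] The repeated CNI failures above all trace back to one missing file, /var/lib/calico/nodename; per the error text itself, that file only appears once the calico/node container is running with /var/lib/calico mounted, so until the calico-node container started here every sandbox add/delete on this node failed identically. Below is a minimal Go sketch of that existence check, for illustration only — it approximates the stat the plugin reports failing, it is not the Calico plugin source; only the path and its meaning are taken from the log.

// Illustrative only: approximates the check behind the repeated
// "stat /var/lib/calico/nodename: no such file or directory" errors above.
// calico/node writes this file once it is running with /var/lib/calico mounted.
package main

import (
	"fmt"
	"os"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename"
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		if os.IsNotExist(err) {
			// Same condition the CNI plugin surfaces in the log above.
			fmt.Printf("%s missing: calico/node has not written it yet\n", nodenameFile)
			os.Exit(1)
		}
		fmt.Printf("stat %s: %v\n", nodenameFile, err)
		os.Exit(1)
	}
	fmt.Printf("node name recorded by calico/node: %s\n", string(data))
}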
Jan 14 00:08:18.950817 containerd[2043]: time="2026-01-14T00:08:18.950624717Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9k9wp,Uid:f82b7507-d8a2-4f12-bcc9-0288422aaee6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e91a87d1b77d9ce73208a579fef523d1312f79640466bf095762a73de56ad92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:18.950981 kubelet[3576]: E0114 00:08:18.950925 3576 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e91a87d1b77d9ce73208a579fef523d1312f79640466bf095762a73de56ad92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:18.951476 kubelet[3576]: E0114 00:08:18.950972 3576 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e91a87d1b77d9ce73208a579fef523d1312f79640466bf095762a73de56ad92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9k9wp" Jan 14 00:08:18.951476 kubelet[3576]: E0114 00:08:18.951005 3576 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e91a87d1b77d9ce73208a579fef523d1312f79640466bf095762a73de56ad92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9k9wp" Jan 14 00:08:18.951476 kubelet[3576]: E0114 00:08:18.951042 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-9k9wp_kube-system(f82b7507-d8a2-4f12-bcc9-0288422aaee6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-9k9wp_kube-system(f82b7507-d8a2-4f12-bcc9-0288422aaee6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e91a87d1b77d9ce73208a579fef523d1312f79640466bf095762a73de56ad92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-9k9wp" podUID="f82b7507-d8a2-4f12-bcc9-0288422aaee6" Jan 14 00:08:18.953753 containerd[2043]: time="2026-01-14T00:08:18.953704524Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-zrb5n,Uid:a80d0249-070a-486c-a74c-948bf824745a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e6ea6f1eee778bce859d4899c9f7b65fd47326cb826e655ed0ac986f3576bca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:18.953902 kubelet[3576]: E0114 00:08:18.953877 3576 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"6e6ea6f1eee778bce859d4899c9f7b65fd47326cb826e655ed0ac986f3576bca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:08:18.953939 kubelet[3576]: E0114 00:08:18.953911 3576 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e6ea6f1eee778bce859d4899c9f7b65fd47326cb826e655ed0ac986f3576bca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-zrb5n" Jan 14 00:08:18.953939 kubelet[3576]: E0114 00:08:18.953923 3576 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e6ea6f1eee778bce859d4899c9f7b65fd47326cb826e655ed0ac986f3576bca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-zrb5n" Jan 14 00:08:18.954009 kubelet[3576]: E0114 00:08:18.953977 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-zrb5n_calico-system(a80d0249-070a-486c-a74c-948bf824745a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-zrb5n_calico-system(a80d0249-070a-486c-a74c-948bf824745a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e6ea6f1eee778bce859d4899c9f7b65fd47326cb826e655ed0ac986f3576bca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-zrb5n" podUID="a80d0249-070a-486c-a74c-948bf824745a" Jan 14 00:08:18.965000 audit: BPF prog-id=196 op=LOAD Jan 14 00:08:18.965000 audit[4832]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4111 pid=4832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:18.988340 kernel: audit: type=1334 audit(1768349298.965:592): prog-id=196 op=LOAD Jan 14 00:08:18.988448 kernel: audit: type=1300 audit(1768349298.965:592): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4111 pid=4832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:18.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437636537363536386635313265646331313735636366303638353035 Jan 14 00:08:19.006588 kernel: audit: type=1327 audit(1768349298.965:592): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437636537363536386635313265646331313735636366303638353035 Jan 14 
00:08:18.965000 audit: BPF prog-id=197 op=LOAD Jan 14 00:08:19.011473 kernel: audit: type=1334 audit(1768349298.965:593): prog-id=197 op=LOAD Jan 14 00:08:18.965000 audit[4832]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4111 pid=4832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:19.030267 kernel: audit: type=1300 audit(1768349298.965:593): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4111 pid=4832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:18.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437636537363536386635313265646331313735636366303638353035 Jan 14 00:08:19.048347 kernel: audit: type=1327 audit(1768349298.965:593): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437636537363536386635313265646331313735636366303638353035 Jan 14 00:08:19.050056 kernel: audit: type=1334 audit(1768349298.970:594): prog-id=197 op=UNLOAD Jan 14 00:08:18.970000 audit: BPF prog-id=197 op=UNLOAD Jan 14 00:08:18.970000 audit[4832]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4111 pid=4832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:19.072451 kernel: audit: type=1300 audit(1768349298.970:594): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4111 pid=4832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:18.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437636537363536386635313265646331313735636366303638353035 Jan 14 00:08:19.090457 kernel: audit: type=1327 audit(1768349298.970:594): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437636537363536386635313265646331313735636366303638353035 Jan 14 00:08:18.970000 audit: BPF prog-id=196 op=UNLOAD Jan 14 00:08:19.096038 kernel: audit: type=1334 audit(1768349298.970:595): prog-id=196 op=UNLOAD Jan 14 00:08:18.970000 audit[4832]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4111 pid=4832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:18.970000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437636537363536386635313265646331313735636366303638353035 Jan 14 00:08:18.970000 audit: BPF prog-id=198 op=LOAD Jan 14 00:08:18.970000 audit[4832]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4111 pid=4832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:18.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437636537363536386635313265646331313735636366303638353035 Jan 14 00:08:19.113723 containerd[2043]: time="2026-01-14T00:08:19.113690240Z" level=info msg="StartContainer for \"d7ce76568f512edc1175ccf0685059fb3e6aec50dbc535fdf409f9fedf351afe\" returns successfully" Jan 14 00:08:19.204602 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 14 00:08:19.204727 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 14 00:08:19.352290 systemd[1]: run-netns-cni\x2da9a600b2\x2d7997\x2d8262\x2d58fc\x2d6312956d53df.mount: Deactivated successfully. Jan 14 00:08:19.352366 systemd[1]: run-netns-cni\x2d65bb01cc\x2dfb66\x2dad22\x2d39c9\x2d59b4edecb0f6.mount: Deactivated successfully. Jan 14 00:08:19.352400 systemd[1]: run-netns-cni\x2d38da7def\x2d6585\x2dd245\x2dcfb2\x2dc4677ac0920c.mount: Deactivated successfully. Jan 14 00:08:19.386452 kubelet[3576]: I0114 00:08:19.386361 3576 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb-whisker-ca-bundle\") pod \"e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb\" (UID: \"e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb\") " Jan 14 00:08:19.387441 kubelet[3576]: I0114 00:08:19.387423 3576 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb-whisker-backend-key-pair\") pod \"e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb\" (UID: \"e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb\") " Jan 14 00:08:19.388136 kubelet[3576]: I0114 00:08:19.388117 3576 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc2gq\" (UniqueName: \"kubernetes.io/projected/e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb-kube-api-access-kc2gq\") pod \"e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb\" (UID: \"e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb\") " Jan 14 00:08:19.390096 kubelet[3576]: I0114 00:08:19.387353 3576 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb" (UID: "e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 14 00:08:19.394215 systemd[1]: var-lib-kubelet-pods-e1d69d8a\x2ddfca\x2d48bb\x2da4a5\x2de96ff97c8dcb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkc2gq.mount: Deactivated successfully. 
Jan 14 00:08:19.394313 systemd[1]: var-lib-kubelet-pods-e1d69d8a\x2ddfca\x2d48bb\x2da4a5\x2de96ff97c8dcb-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 14 00:08:19.396066 kubelet[3576]: I0114 00:08:19.394940 3576 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb-kube-api-access-kc2gq" (OuterVolumeSpecName: "kube-api-access-kc2gq") pod "e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb" (UID: "e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb"). InnerVolumeSpecName "kube-api-access-kc2gq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 14 00:08:19.397445 kubelet[3576]: I0114 00:08:19.396873 3576 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb" (UID: "e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 14 00:08:19.489457 kubelet[3576]: I0114 00:08:19.489417 3576 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb-whisker-backend-key-pair\") on node \"ci-4547.0.0-n-16ff4e9fd7\" DevicePath \"\"" Jan 14 00:08:19.489457 kubelet[3576]: I0114 00:08:19.489453 3576 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kc2gq\" (UniqueName: \"kubernetes.io/projected/e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb-kube-api-access-kc2gq\") on node \"ci-4547.0.0-n-16ff4e9fd7\" DevicePath \"\"" Jan 14 00:08:19.489457 kubelet[3576]: I0114 00:08:19.489460 3576 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb-whisker-ca-bundle\") on node \"ci-4547.0.0-n-16ff4e9fd7\" DevicePath \"\"" Jan 14 00:08:19.738223 systemd[1]: Removed slice kubepods-besteffort-pode1d69d8a_dfca_48bb_a4a5_e96ff97c8dcb.slice - libcontainer container kubepods-besteffort-pode1d69d8a_dfca_48bb_a4a5_e96ff97c8dcb.slice. Jan 14 00:08:19.940155 kubelet[3576]: I0114 00:08:19.940069 3576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kqctj" podStartSLOduration=1.772254261 podStartE2EDuration="29.940052645s" podCreationTimestamp="2026-01-14 00:07:50 +0000 UTC" firstStartedPulling="2026-01-14 00:07:50.648962546 +0000 UTC m=+25.027364745" lastFinishedPulling="2026-01-14 00:08:18.816760946 +0000 UTC m=+53.195163129" observedRunningTime="2026-01-14 00:08:19.92553572 +0000 UTC m=+54.303937903" watchObservedRunningTime="2026-01-14 00:08:19.940052645 +0000 UTC m=+54.318454828" Jan 14 00:08:20.027837 systemd[1]: Created slice kubepods-besteffort-podca6326e0_1651_4ad3_9a4d_868c20d45353.slice - libcontainer container kubepods-besteffort-podca6326e0_1651_4ad3_9a4d_868c20d45353.slice. 
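[Note, not part of the log] A quick consistency check on the calico-node-kqctj startup record above, using the monotonic offsets (m=+…) in that line: image pulling took 53.195163129 − 25.027364745 = 28.167798384 s, end-to-end startup was 29.940052645 s (creation at 00:07:50, observed running at 00:08:19.940052645), and 29.940052645 − 28.167798384 = 1.772254261 s, which matches the logged podStartSLOduration — so the SLO figure is evidently the end-to-end time with image-pull time excluded.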
Jan 14 00:08:20.094123 kubelet[3576]: I0114 00:08:20.094077 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vf22\" (UniqueName: \"kubernetes.io/projected/ca6326e0-1651-4ad3-9a4d-868c20d45353-kube-api-access-6vf22\") pod \"whisker-79587cb578-7tgbl\" (UID: \"ca6326e0-1651-4ad3-9a4d-868c20d45353\") " pod="calico-system/whisker-79587cb578-7tgbl" Jan 14 00:08:20.094123 kubelet[3576]: I0114 00:08:20.094127 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca6326e0-1651-4ad3-9a4d-868c20d45353-whisker-ca-bundle\") pod \"whisker-79587cb578-7tgbl\" (UID: \"ca6326e0-1651-4ad3-9a4d-868c20d45353\") " pod="calico-system/whisker-79587cb578-7tgbl" Jan 14 00:08:20.094307 kubelet[3576]: I0114 00:08:20.094150 3576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ca6326e0-1651-4ad3-9a4d-868c20d45353-whisker-backend-key-pair\") pod \"whisker-79587cb578-7tgbl\" (UID: \"ca6326e0-1651-4ad3-9a4d-868c20d45353\") " pod="calico-system/whisker-79587cb578-7tgbl" Jan 14 00:08:20.336972 containerd[2043]: time="2026-01-14T00:08:20.336853014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79587cb578-7tgbl,Uid:ca6326e0-1651-4ad3-9a4d-868c20d45353,Namespace:calico-system,Attempt:0,}" Jan 14 00:08:20.476556 systemd-networkd[1621]: cali81bce496030: Link UP Jan 14 00:08:20.478513 systemd-networkd[1621]: cali81bce496030: Gained carrier Jan 14 00:08:20.498541 containerd[2043]: 2026-01-14 00:08:20.364 [INFO][4894] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 00:08:20.498541 containerd[2043]: 2026-01-14 00:08:20.406 [INFO][4894] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--n--16ff4e9fd7-k8s-whisker--79587cb578--7tgbl-eth0 whisker-79587cb578- calico-system ca6326e0-1651-4ad3-9a4d-868c20d45353 935 0 2026-01-14 00:08:19 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:79587cb578 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4547.0.0-n-16ff4e9fd7 whisker-79587cb578-7tgbl eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali81bce496030 [] [] }} ContainerID="d182abc1ca3aa602b36becedc77d93c17a7918b7ec0e78d697eb199dde134ed1" Namespace="calico-system" Pod="whisker-79587cb578-7tgbl" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-whisker--79587cb578--7tgbl-" Jan 14 00:08:20.498541 containerd[2043]: 2026-01-14 00:08:20.407 [INFO][4894] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d182abc1ca3aa602b36becedc77d93c17a7918b7ec0e78d697eb199dde134ed1" Namespace="calico-system" Pod="whisker-79587cb578-7tgbl" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-whisker--79587cb578--7tgbl-eth0" Jan 14 00:08:20.498541 containerd[2043]: 2026-01-14 00:08:20.425 [INFO][4906] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d182abc1ca3aa602b36becedc77d93c17a7918b7ec0e78d697eb199dde134ed1" HandleID="k8s-pod-network.d182abc1ca3aa602b36becedc77d93c17a7918b7ec0e78d697eb199dde134ed1" Workload="ci--4547.0.0--n--16ff4e9fd7-k8s-whisker--79587cb578--7tgbl-eth0" Jan 14 00:08:20.498751 containerd[2043]: 2026-01-14 00:08:20.425 [INFO][4906] ipam/ipam_plugin.go 275: 
Auto assigning IP ContainerID="d182abc1ca3aa602b36becedc77d93c17a7918b7ec0e78d697eb199dde134ed1" HandleID="k8s-pod-network.d182abc1ca3aa602b36becedc77d93c17a7918b7ec0e78d697eb199dde134ed1" Workload="ci--4547.0.0--n--16ff4e9fd7-k8s-whisker--79587cb578--7tgbl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-n-16ff4e9fd7", "pod":"whisker-79587cb578-7tgbl", "timestamp":"2026-01-14 00:08:20.425493637 +0000 UTC"}, Hostname:"ci-4547.0.0-n-16ff4e9fd7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:08:20.498751 containerd[2043]: 2026-01-14 00:08:20.425 [INFO][4906] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 00:08:20.498751 containerd[2043]: 2026-01-14 00:08:20.425 [INFO][4906] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 00:08:20.498751 containerd[2043]: 2026-01-14 00:08:20.425 [INFO][4906] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-n-16ff4e9fd7' Jan 14 00:08:20.498751 containerd[2043]: 2026-01-14 00:08:20.431 [INFO][4906] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d182abc1ca3aa602b36becedc77d93c17a7918b7ec0e78d697eb199dde134ed1" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:20.498751 containerd[2043]: 2026-01-14 00:08:20.434 [INFO][4906] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:20.498751 containerd[2043]: 2026-01-14 00:08:20.438 [INFO][4906] ipam/ipam.go 511: Trying affinity for 192.168.44.0/26 host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:20.498751 containerd[2043]: 2026-01-14 00:08:20.440 [INFO][4906] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.0/26 host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:20.498751 containerd[2043]: 2026-01-14 00:08:20.441 [INFO][4906] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.0/26 host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:20.498918 containerd[2043]: 2026-01-14 00:08:20.441 [INFO][4906] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.44.0/26 handle="k8s-pod-network.d182abc1ca3aa602b36becedc77d93c17a7918b7ec0e78d697eb199dde134ed1" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:20.498918 containerd[2043]: 2026-01-14 00:08:20.443 [INFO][4906] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d182abc1ca3aa602b36becedc77d93c17a7918b7ec0e78d697eb199dde134ed1 Jan 14 00:08:20.498918 containerd[2043]: 2026-01-14 00:08:20.449 [INFO][4906] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.44.0/26 handle="k8s-pod-network.d182abc1ca3aa602b36becedc77d93c17a7918b7ec0e78d697eb199dde134ed1" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:20.498918 containerd[2043]: 2026-01-14 00:08:20.462 [INFO][4906] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.44.1/26] block=192.168.44.0/26 handle="k8s-pod-network.d182abc1ca3aa602b36becedc77d93c17a7918b7ec0e78d697eb199dde134ed1" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:20.498918 containerd[2043]: 2026-01-14 00:08:20.462 [INFO][4906] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.1/26] handle="k8s-pod-network.d182abc1ca3aa602b36becedc77d93c17a7918b7ec0e78d697eb199dde134ed1" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:20.498918 containerd[2043]: 2026-01-14 00:08:20.462 [INFO][4906] 
ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 00:08:20.498918 containerd[2043]: 2026-01-14 00:08:20.462 [INFO][4906] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.44.1/26] IPv6=[] ContainerID="d182abc1ca3aa602b36becedc77d93c17a7918b7ec0e78d697eb199dde134ed1" HandleID="k8s-pod-network.d182abc1ca3aa602b36becedc77d93c17a7918b7ec0e78d697eb199dde134ed1" Workload="ci--4547.0.0--n--16ff4e9fd7-k8s-whisker--79587cb578--7tgbl-eth0" Jan 14 00:08:20.499049 containerd[2043]: 2026-01-14 00:08:20.464 [INFO][4894] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d182abc1ca3aa602b36becedc77d93c17a7918b7ec0e78d697eb199dde134ed1" Namespace="calico-system" Pod="whisker-79587cb578-7tgbl" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-whisker--79587cb578--7tgbl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--16ff4e9fd7-k8s-whisker--79587cb578--7tgbl-eth0", GenerateName:"whisker-79587cb578-", Namespace:"calico-system", SelfLink:"", UID:"ca6326e0-1651-4ad3-9a4d-868c20d45353", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 8, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79587cb578", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-16ff4e9fd7", ContainerID:"", Pod:"whisker-79587cb578-7tgbl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.44.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali81bce496030", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:08:20.499049 containerd[2043]: 2026-01-14 00:08:20.464 [INFO][4894] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.1/32] ContainerID="d182abc1ca3aa602b36becedc77d93c17a7918b7ec0e78d697eb199dde134ed1" Namespace="calico-system" Pod="whisker-79587cb578-7tgbl" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-whisker--79587cb578--7tgbl-eth0" Jan 14 00:08:20.499104 containerd[2043]: 2026-01-14 00:08:20.465 [INFO][4894] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali81bce496030 ContainerID="d182abc1ca3aa602b36becedc77d93c17a7918b7ec0e78d697eb199dde134ed1" Namespace="calico-system" Pod="whisker-79587cb578-7tgbl" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-whisker--79587cb578--7tgbl-eth0" Jan 14 00:08:20.499104 containerd[2043]: 2026-01-14 00:08:20.478 [INFO][4894] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d182abc1ca3aa602b36becedc77d93c17a7918b7ec0e78d697eb199dde134ed1" Namespace="calico-system" Pod="whisker-79587cb578-7tgbl" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-whisker--79587cb578--7tgbl-eth0" Jan 14 00:08:20.499132 containerd[2043]: 2026-01-14 00:08:20.479 [INFO][4894] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d182abc1ca3aa602b36becedc77d93c17a7918b7ec0e78d697eb199dde134ed1" Namespace="calico-system" Pod="whisker-79587cb578-7tgbl" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-whisker--79587cb578--7tgbl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--16ff4e9fd7-k8s-whisker--79587cb578--7tgbl-eth0", GenerateName:"whisker-79587cb578-", Namespace:"calico-system", SelfLink:"", UID:"ca6326e0-1651-4ad3-9a4d-868c20d45353", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 8, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79587cb578", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-16ff4e9fd7", ContainerID:"d182abc1ca3aa602b36becedc77d93c17a7918b7ec0e78d697eb199dde134ed1", Pod:"whisker-79587cb578-7tgbl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.44.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali81bce496030", MAC:"4e:3c:aa:c8:ac:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:08:20.499165 containerd[2043]: 2026-01-14 00:08:20.496 [INFO][4894] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d182abc1ca3aa602b36becedc77d93c17a7918b7ec0e78d697eb199dde134ed1" Namespace="calico-system" Pod="whisker-79587cb578-7tgbl" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-whisker--79587cb578--7tgbl-eth0" Jan 14 00:08:20.537930 containerd[2043]: time="2026-01-14T00:08:20.537885955Z" level=info msg="connecting to shim d182abc1ca3aa602b36becedc77d93c17a7918b7ec0e78d697eb199dde134ed1" address="unix:///run/containerd/s/c8bc12ee27d44bb8ff0ccfdd47ab75c2fc52d4ec46437201d91593b8251a78be" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:08:20.560710 systemd[1]: Started cri-containerd-d182abc1ca3aa602b36becedc77d93c17a7918b7ec0e78d697eb199dde134ed1.scope - libcontainer container d182abc1ca3aa602b36becedc77d93c17a7918b7ec0e78d697eb199dde134ed1. 
Jan 14 00:08:20.576000 audit: BPF prog-id=199 op=LOAD Jan 14 00:08:20.577000 audit: BPF prog-id=200 op=LOAD Jan 14 00:08:20.577000 audit[4941]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=4930 pid=4941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:20.577000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431383261626331636133616136303262333662656365646337376439 Jan 14 00:08:20.577000 audit: BPF prog-id=200 op=UNLOAD Jan 14 00:08:20.577000 audit[4941]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4930 pid=4941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:20.577000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431383261626331636133616136303262333662656365646337376439 Jan 14 00:08:20.578000 audit: BPF prog-id=201 op=LOAD Jan 14 00:08:20.578000 audit[4941]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=4930 pid=4941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:20.578000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431383261626331636133616136303262333662656365646337376439 Jan 14 00:08:20.578000 audit: BPF prog-id=202 op=LOAD Jan 14 00:08:20.578000 audit[4941]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=4930 pid=4941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:20.578000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431383261626331636133616136303262333662656365646337376439 Jan 14 00:08:20.578000 audit: BPF prog-id=202 op=UNLOAD Jan 14 00:08:20.578000 audit[4941]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4930 pid=4941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:20.578000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431383261626331636133616136303262333662656365646337376439 Jan 14 00:08:20.578000 audit: BPF prog-id=201 op=UNLOAD Jan 14 00:08:20.578000 audit[4941]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4930 pid=4941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:20.578000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431383261626331636133616136303262333662656365646337376439 Jan 14 00:08:20.578000 audit: BPF prog-id=203 op=LOAD Jan 14 00:08:20.578000 audit[4941]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=4930 pid=4941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:20.578000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431383261626331636133616136303262333662656365646337376439 Jan 14 00:08:20.652409 containerd[2043]: time="2026-01-14T00:08:20.652287807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79587cb578-7tgbl,Uid:ca6326e0-1651-4ad3-9a4d-868c20d45353,Namespace:calico-system,Attempt:0,} returns sandbox id \"d182abc1ca3aa602b36becedc77d93c17a7918b7ec0e78d697eb199dde134ed1\"" Jan 14 00:08:20.655846 containerd[2043]: time="2026-01-14T00:08:20.655809290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 00:08:20.919798 containerd[2043]: time="2026-01-14T00:08:20.919626905Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:08:20.924645 containerd[2043]: time="2026-01-14T00:08:20.924565098Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 00:08:20.924964 containerd[2043]: time="2026-01-14T00:08:20.924632700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 00:08:20.925218 kubelet[3576]: E0114 00:08:20.925075 3576 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 00:08:20.925312 kubelet[3576]: E0114 00:08:20.925233 3576 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 00:08:20.929179 kubelet[3576]: E0114 00:08:20.929124 3576 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-79587cb578-7tgbl_calico-system(ca6326e0-1651-4ad3-9a4d-868c20d45353): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 00:08:20.930889 containerd[2043]: time="2026-01-14T00:08:20.930532157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 00:08:20.937000 audit: BPF prog-id=204 op=LOAD Jan 14 00:08:20.937000 audit[5094]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff55c9b78 a2=98 a3=fffff55c9b68 items=0 ppid=5000 pid=5094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:20.937000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 00:08:20.937000 audit: BPF prog-id=204 op=UNLOAD Jan 14 00:08:20.937000 audit[5094]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff55c9b48 a3=0 items=0 ppid=5000 pid=5094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:20.937000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 00:08:20.937000 audit: BPF prog-id=205 op=LOAD Jan 14 00:08:20.937000 audit[5094]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff55c9a28 a2=74 a3=95 items=0 ppid=5000 pid=5094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:20.937000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 00:08:20.937000 audit: BPF prog-id=205 op=UNLOAD Jan 14 00:08:20.937000 audit[5094]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=5000 pid=5094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:20.937000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 00:08:20.937000 audit: BPF prog-id=206 op=LOAD Jan 14 00:08:20.937000 audit[5094]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff55c9a58 a2=40 a3=fffff55c9a88 items=0 ppid=5000 pid=5094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:20.937000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 00:08:20.937000 audit: BPF prog-id=206 op=UNLOAD Jan 14 00:08:20.937000 audit[5094]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffff55c9a88 items=0 ppid=5000 pid=5094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:20.937000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 00:08:20.939000 audit: BPF prog-id=207 op=LOAD Jan 14 00:08:20.939000 audit[5095]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffef9350a8 a2=98 a3=ffffef935098 items=0 ppid=5000 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:20.939000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:08:20.939000 audit: BPF prog-id=207 op=UNLOAD Jan 14 00:08:20.939000 audit[5095]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffef935078 a3=0 items=0 ppid=5000 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:20.939000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:08:20.939000 audit: BPF prog-id=208 op=LOAD Jan 14 00:08:20.939000 audit[5095]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffef934d38 a2=74 a3=95 items=0 ppid=5000 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:20.939000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:08:20.939000 audit: BPF prog-id=208 op=UNLOAD Jan 14 00:08:20.939000 audit[5095]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=5000 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:20.939000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:08:20.939000 audit: BPF prog-id=209 op=LOAD Jan 14 00:08:20.939000 audit[5095]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffef934d98 a2=94 a3=2 items=0 ppid=5000 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:20.939000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:08:20.939000 audit: BPF prog-id=209 op=UNLOAD Jan 14 00:08:20.939000 audit[5095]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 
ppid=5000 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:20.939000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:08:21.019000 audit: BPF prog-id=210 op=LOAD Jan 14 00:08:21.019000 audit[5095]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffef934d58 a2=40 a3=ffffef934d88 items=0 ppid=5000 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.019000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:08:21.019000 audit: BPF prog-id=210 op=UNLOAD Jan 14 00:08:21.019000 audit[5095]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffef934d88 items=0 ppid=5000 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.019000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:08:21.025000 audit: BPF prog-id=211 op=LOAD Jan 14 00:08:21.025000 audit[5095]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffef934d68 a2=94 a3=4 items=0 ppid=5000 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.025000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:08:21.026000 audit: BPF prog-id=211 op=UNLOAD Jan 14 00:08:21.026000 audit[5095]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=5000 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.026000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:08:21.026000 audit: BPF prog-id=212 op=LOAD Jan 14 00:08:21.026000 audit[5095]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffef934ba8 a2=94 a3=5 items=0 ppid=5000 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.026000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:08:21.026000 audit: BPF prog-id=212 op=UNLOAD Jan 14 00:08:21.026000 audit[5095]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=5000 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.026000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:08:21.026000 audit: BPF prog-id=213 op=LOAD Jan 14 00:08:21.026000 audit[5095]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffef934dd8 a2=94 a3=6 items=0 ppid=5000 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.026000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:08:21.026000 audit: BPF prog-id=213 op=UNLOAD Jan 14 00:08:21.026000 audit[5095]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=5000 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.026000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:08:21.026000 audit: BPF prog-id=214 op=LOAD Jan 14 00:08:21.026000 audit[5095]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffef9345a8 a2=94 a3=83 items=0 ppid=5000 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.026000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:08:21.026000 audit: BPF prog-id=215 op=LOAD Jan 14 00:08:21.026000 audit[5095]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffef934368 a2=94 a3=2 items=0 ppid=5000 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.026000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:08:21.026000 audit: BPF prog-id=215 op=UNLOAD Jan 14 00:08:21.026000 audit[5095]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=5000 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.026000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:08:21.027000 audit: BPF prog-id=214 op=UNLOAD Jan 14 00:08:21.027000 audit[5095]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=12a7f620 a3=12a72b00 items=0 ppid=5000 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.027000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:08:21.033000 audit: BPF prog-id=216 op=LOAD Jan 14 00:08:21.033000 audit[5098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff33d0d08 a2=98 a3=fffff33d0cf8 items=0 ppid=5000 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.033000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 00:08:21.033000 audit: BPF prog-id=216 op=UNLOAD Jan 14 00:08:21.033000 audit[5098]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff33d0cd8 a3=0 items=0 ppid=5000 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.033000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 00:08:21.033000 audit: BPF prog-id=217 op=LOAD Jan 14 00:08:21.033000 audit[5098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff33d0bb8 a2=74 a3=95 items=0 ppid=5000 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.033000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 00:08:21.033000 audit: BPF prog-id=217 op=UNLOAD Jan 14 00:08:21.033000 audit[5098]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=5000 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.033000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 00:08:21.033000 audit: BPF prog-id=218 op=LOAD Jan 14 00:08:21.033000 audit[5098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff33d0be8 a2=40 a3=fffff33d0c18 items=0 ppid=5000 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.033000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 00:08:21.033000 audit: BPF prog-id=218 op=UNLOAD Jan 14 00:08:21.033000 audit[5098]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffff33d0c18 items=0 ppid=5000 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.033000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 00:08:21.077989 systemd-networkd[1621]: vxlan.calico: Link UP Jan 14 00:08:21.078010 systemd-networkd[1621]: vxlan.calico: Gained carrier Jan 14 00:08:21.100000 audit: BPF prog-id=219 op=LOAD Jan 14 00:08:21.100000 audit[5125]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc034cd28 a2=98 a3=ffffc034cd18 items=0 ppid=5000 pid=5125 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.100000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:08:21.101000 audit: BPF prog-id=219 op=UNLOAD Jan 14 00:08:21.101000 audit[5125]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc034ccf8 a3=0 items=0 ppid=5000 pid=5125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.101000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:08:21.101000 audit: BPF prog-id=220 op=LOAD Jan 14 00:08:21.101000 audit[5125]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc034ca08 a2=74 a3=95 items=0 ppid=5000 pid=5125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.101000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:08:21.102000 audit: BPF prog-id=220 op=UNLOAD Jan 14 00:08:21.102000 audit[5125]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=5000 pid=5125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.102000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:08:21.102000 audit: BPF prog-id=221 op=LOAD Jan 14 00:08:21.102000 audit[5125]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc034ca68 a2=94 a3=2 items=0 ppid=5000 pid=5125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.102000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:08:21.102000 audit: BPF prog-id=221 op=UNLOAD Jan 14 00:08:21.102000 audit[5125]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=5000 pid=5125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.102000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:08:21.102000 audit: BPF prog-id=222 op=LOAD Jan 14 00:08:21.102000 audit[5125]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc034c8e8 a2=40 a3=ffffc034c918 items=0 ppid=5000 pid=5125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.102000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:08:21.102000 audit: BPF prog-id=222 op=UNLOAD Jan 14 00:08:21.102000 audit[5125]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffc034c918 items=0 ppid=5000 pid=5125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.102000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:08:21.102000 audit: BPF prog-id=223 op=LOAD Jan 14 00:08:21.102000 audit[5125]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc034ca38 a2=94 a3=b7 items=0 ppid=5000 pid=5125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.102000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:08:21.102000 audit: BPF prog-id=223 op=UNLOAD Jan 14 00:08:21.102000 audit[5125]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=5000 pid=5125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.102000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:08:21.103000 audit: BPF prog-id=224 op=LOAD Jan 14 00:08:21.103000 audit[5125]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc034c0e8 a2=94 a3=2 items=0 ppid=5000 pid=5125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.103000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:08:21.103000 audit: BPF prog-id=224 op=UNLOAD Jan 14 00:08:21.103000 audit[5125]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=5000 pid=5125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.103000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:08:21.103000 audit: BPF prog-id=225 op=LOAD Jan 14 00:08:21.103000 audit[5125]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc034c278 a2=94 a3=30 items=0 ppid=5000 pid=5125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.103000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:08:21.108000 audit: BPF prog-id=226 op=LOAD Jan 14 00:08:21.108000 audit[5130]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff42865b8 a2=98 a3=fffff42865a8 items=0 ppid=5000 pid=5130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.108000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:08:21.108000 audit: BPF prog-id=226 op=UNLOAD Jan 14 00:08:21.108000 audit[5130]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff4286588 a3=0 items=0 ppid=5000 pid=5130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.108000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:08:21.109000 audit: BPF prog-id=227 op=LOAD Jan 14 00:08:21.109000 audit[5130]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff4286248 a2=74 a3=95 items=0 ppid=5000 pid=5130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.109000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:08:21.110000 audit: BPF prog-id=227 op=UNLOAD Jan 14 00:08:21.110000 audit[5130]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=5000 pid=5130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.110000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:08:21.110000 audit: BPF prog-id=228 op=LOAD Jan 14 00:08:21.110000 audit[5130]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff42862a8 a2=94 a3=2 items=0 ppid=5000 pid=5130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.110000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:08:21.110000 audit: BPF prog-id=228 op=UNLOAD Jan 14 00:08:21.110000 audit[5130]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=5000 pid=5130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.110000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:08:21.195000 audit: BPF prog-id=229 op=LOAD Jan 14 00:08:21.195000 audit[5130]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff4286268 a2=40 a3=fffff4286298 items=0 ppid=5000 pid=5130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.195000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:08:21.196000 audit: BPF prog-id=229 op=UNLOAD Jan 14 00:08:21.196000 audit[5130]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=fffff4286298 items=0 ppid=5000 pid=5130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.196000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:08:21.202000 audit: BPF prog-id=230 op=LOAD Jan 14 00:08:21.202000 audit[5130]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff4286278 a2=94 a3=4 items=0 ppid=5000 pid=5130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.202000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:08:21.202000 audit: BPF prog-id=230 op=UNLOAD Jan 14 00:08:21.202000 audit[5130]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=5000 pid=5130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.202000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:08:21.203000 audit: BPF prog-id=231 op=LOAD Jan 14 00:08:21.203000 audit[5130]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff42860b8 a2=94 a3=5 items=0 ppid=5000 pid=5130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.203000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:08:21.203000 audit: BPF prog-id=231 op=UNLOAD Jan 14 00:08:21.203000 audit[5130]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=5000 pid=5130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.203000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:08:21.203000 audit: BPF prog-id=232 op=LOAD Jan 14 00:08:21.203000 audit[5130]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff42862e8 a2=94 a3=6 items=0 ppid=5000 pid=5130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.203000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:08:21.203000 audit: BPF prog-id=232 op=UNLOAD Jan 14 00:08:21.203000 audit[5130]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=5000 pid=5130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.203000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:08:21.203000 audit: BPF prog-id=233 op=LOAD Jan 14 00:08:21.203000 audit[5130]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff4285ab8 a2=94 a3=83 items=0 ppid=5000 pid=5130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.203000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:08:21.203000 audit: BPF prog-id=234 op=LOAD Jan 14 00:08:21.203000 audit[5130]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=fffff4285878 a2=94 
a3=2 items=0 ppid=5000 pid=5130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.203000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:08:21.203000 audit: BPF prog-id=234 op=UNLOAD Jan 14 00:08:21.203000 audit[5130]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=5000 pid=5130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.203000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:08:21.204000 audit: BPF prog-id=233 op=UNLOAD Jan 14 00:08:21.204000 audit[5130]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=3cb10620 a3=3cb03b00 items=0 ppid=5000 pid=5130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.204000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:08:21.212000 audit: BPF prog-id=225 op=UNLOAD Jan 14 00:08:21.212000 audit[5000]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4000944340 a2=0 a3=0 items=0 ppid=4970 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.212000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 14 00:08:21.219755 containerd[2043]: time="2026-01-14T00:08:21.219712375Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:08:21.222765 containerd[2043]: time="2026-01-14T00:08:21.222709145Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 00:08:21.223573 containerd[2043]: time="2026-01-14T00:08:21.222823356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 00:08:21.223647 kubelet[3576]: E0114 00:08:21.223025 3576 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 00:08:21.223647 kubelet[3576]: E0114 00:08:21.223075 3576 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 00:08:21.223647 kubelet[3576]: E0114 00:08:21.223155 3576 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-79587cb578-7tgbl_calico-system(ca6326e0-1651-4ad3-9a4d-868c20d45353): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 00:08:21.228752 kubelet[3576]: E0114 00:08:21.228700 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79587cb578-7tgbl" podUID="ca6326e0-1651-4ad3-9a4d-868c20d45353" Jan 14 00:08:21.260000 audit[5158]: NETFILTER_CFG table=mangle:122 family=2 entries=16 op=nft_register_chain pid=5158 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:08:21.260000 audit[5158]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffe82747b0 a2=0 a3=ffff90aaffa8 items=0 ppid=5000 pid=5158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.260000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:08:21.264000 audit[5156]: NETFILTER_CFG table=nat:123 family=2 entries=15 op=nft_register_chain pid=5156 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:08:21.264000 audit[5156]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffd05f9020 a2=0 a3=ffff85daefa8 items=0 ppid=5000 pid=5156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.264000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:08:21.273000 audit[5157]: NETFILTER_CFG table=raw:124 family=2 entries=21 op=nft_register_chain pid=5157 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:08:21.273000 audit[5157]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffc7c640e0 a2=0 a3=ffffb0500fa8 items=0 ppid=5000 pid=5157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.273000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 
00:08:21.273000 audit[5159]: NETFILTER_CFG table=filter:125 family=2 entries=94 op=nft_register_chain pid=5159 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:08:21.273000 audit[5159]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffc9d7a2e0 a2=0 a3=ffffb84b5fa8 items=0 ppid=5000 pid=5159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.273000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:08:21.735180 kubelet[3576]: I0114 00:08:21.735141 3576 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb" path="/var/lib/kubelet/pods/e1d69d8a-dfca-48bb-a4a5-e96ff97c8dcb/volumes" Jan 14 00:08:21.907469 kubelet[3576]: E0114 00:08:21.907129 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79587cb578-7tgbl" podUID="ca6326e0-1651-4ad3-9a4d-868c20d45353" Jan 14 00:08:21.936000 audit[5173]: NETFILTER_CFG table=filter:126 family=2 entries=20 op=nft_register_rule pid=5173 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:08:21.936000 audit[5173]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcb4344e0 a2=0 a3=1 items=0 ppid=3727 pid=5173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.936000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:08:21.941000 audit[5173]: NETFILTER_CFG table=nat:127 family=2 entries=14 op=nft_register_rule pid=5173 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:08:21.941000 audit[5173]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffcb4344e0 a2=0 a3=1 items=0 ppid=3727 pid=5173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:21.941000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:08:22.162496 systemd-networkd[1621]: cali81bce496030: Gained IPv6LL Jan 14 00:08:22.418583 systemd-networkd[1621]: vxlan.calico: Gained IPv6LL Jan 14 00:08:29.740442 containerd[2043]: time="2026-01-14T00:08:29.740400842Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-pf5pb,Uid:293030d0-9140-4eed-bcb6-fdd77ad1a81b,Namespace:kube-system,Attempt:0,}" Jan 14 00:08:29.746974 containerd[2043]: time="2026-01-14T00:08:29.746935646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7757665449-ss5lr,Uid:0d1fcd1d-b4e7-438b-8969-12e4764b6063,Namespace:calico-system,Attempt:0,}" Jan 14 00:08:29.866841 systemd-networkd[1621]: calif7dc8709255: Link UP Jan 14 00:08:29.867815 systemd-networkd[1621]: calif7dc8709255: Gained carrier Jan 14 00:08:29.883285 containerd[2043]: 2026-01-14 00:08:29.795 [INFO][5186] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--n--16ff4e9fd7-k8s-coredns--66bc5c9577--pf5pb-eth0 coredns-66bc5c9577- kube-system 293030d0-9140-4eed-bcb6-fdd77ad1a81b 835 0 2026-01-14 00:07:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547.0.0-n-16ff4e9fd7 coredns-66bc5c9577-pf5pb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif7dc8709255 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="8a14b08ac1e1f7e11b002da0104f3d9fb9b642ccc46162ac9e137dd56eb8f72a" Namespace="kube-system" Pod="coredns-66bc5c9577-pf5pb" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-coredns--66bc5c9577--pf5pb-" Jan 14 00:08:29.883285 containerd[2043]: 2026-01-14 00:08:29.795 [INFO][5186] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8a14b08ac1e1f7e11b002da0104f3d9fb9b642ccc46162ac9e137dd56eb8f72a" Namespace="kube-system" Pod="coredns-66bc5c9577-pf5pb" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-coredns--66bc5c9577--pf5pb-eth0" Jan 14 00:08:29.883285 containerd[2043]: 2026-01-14 00:08:29.827 [INFO][5209] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a14b08ac1e1f7e11b002da0104f3d9fb9b642ccc46162ac9e137dd56eb8f72a" HandleID="k8s-pod-network.8a14b08ac1e1f7e11b002da0104f3d9fb9b642ccc46162ac9e137dd56eb8f72a" Workload="ci--4547.0.0--n--16ff4e9fd7-k8s-coredns--66bc5c9577--pf5pb-eth0" Jan 14 00:08:29.883465 containerd[2043]: 2026-01-14 00:08:29.827 [INFO][5209] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8a14b08ac1e1f7e11b002da0104f3d9fb9b642ccc46162ac9e137dd56eb8f72a" HandleID="k8s-pod-network.8a14b08ac1e1f7e11b002da0104f3d9fb9b642ccc46162ac9e137dd56eb8f72a" Workload="ci--4547.0.0--n--16ff4e9fd7-k8s-coredns--66bc5c9577--pf5pb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab3a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547.0.0-n-16ff4e9fd7", "pod":"coredns-66bc5c9577-pf5pb", "timestamp":"2026-01-14 00:08:29.827786194 +0000 UTC"}, Hostname:"ci-4547.0.0-n-16ff4e9fd7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:08:29.883465 containerd[2043]: 2026-01-14 00:08:29.827 [INFO][5209] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 00:08:29.883465 containerd[2043]: 2026-01-14 00:08:29.827 [INFO][5209] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 00:08:29.883465 containerd[2043]: 2026-01-14 00:08:29.828 [INFO][5209] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-n-16ff4e9fd7' Jan 14 00:08:29.883465 containerd[2043]: 2026-01-14 00:08:29.833 [INFO][5209] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8a14b08ac1e1f7e11b002da0104f3d9fb9b642ccc46162ac9e137dd56eb8f72a" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:29.883465 containerd[2043]: 2026-01-14 00:08:29.837 [INFO][5209] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:29.883465 containerd[2043]: 2026-01-14 00:08:29.841 [INFO][5209] ipam/ipam.go 511: Trying affinity for 192.168.44.0/26 host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:29.883465 containerd[2043]: 2026-01-14 00:08:29.842 [INFO][5209] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.0/26 host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:29.883465 containerd[2043]: 2026-01-14 00:08:29.844 [INFO][5209] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.0/26 host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:29.883602 containerd[2043]: 2026-01-14 00:08:29.844 [INFO][5209] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.44.0/26 handle="k8s-pod-network.8a14b08ac1e1f7e11b002da0104f3d9fb9b642ccc46162ac9e137dd56eb8f72a" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:29.883602 containerd[2043]: 2026-01-14 00:08:29.845 [INFO][5209] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8a14b08ac1e1f7e11b002da0104f3d9fb9b642ccc46162ac9e137dd56eb8f72a Jan 14 00:08:29.883602 containerd[2043]: 2026-01-14 00:08:29.849 [INFO][5209] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.44.0/26 handle="k8s-pod-network.8a14b08ac1e1f7e11b002da0104f3d9fb9b642ccc46162ac9e137dd56eb8f72a" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:29.883602 containerd[2043]: 2026-01-14 00:08:29.857 [INFO][5209] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.44.2/26] block=192.168.44.0/26 handle="k8s-pod-network.8a14b08ac1e1f7e11b002da0104f3d9fb9b642ccc46162ac9e137dd56eb8f72a" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:29.883602 containerd[2043]: 2026-01-14 00:08:29.857 [INFO][5209] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.2/26] handle="k8s-pod-network.8a14b08ac1e1f7e11b002da0104f3d9fb9b642ccc46162ac9e137dd56eb8f72a" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:29.883602 containerd[2043]: 2026-01-14 00:08:29.857 [INFO][5209] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 00:08:29.883602 containerd[2043]: 2026-01-14 00:08:29.858 [INFO][5209] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.44.2/26] IPv6=[] ContainerID="8a14b08ac1e1f7e11b002da0104f3d9fb9b642ccc46162ac9e137dd56eb8f72a" HandleID="k8s-pod-network.8a14b08ac1e1f7e11b002da0104f3d9fb9b642ccc46162ac9e137dd56eb8f72a" Workload="ci--4547.0.0--n--16ff4e9fd7-k8s-coredns--66bc5c9577--pf5pb-eth0" Jan 14 00:08:29.883694 containerd[2043]: 2026-01-14 00:08:29.860 [INFO][5186] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a14b08ac1e1f7e11b002da0104f3d9fb9b642ccc46162ac9e137dd56eb8f72a" Namespace="kube-system" Pod="coredns-66bc5c9577-pf5pb" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-coredns--66bc5c9577--pf5pb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--16ff4e9fd7-k8s-coredns--66bc5c9577--pf5pb-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"293030d0-9140-4eed-bcb6-fdd77ad1a81b", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 7, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-16ff4e9fd7", ContainerID:"", Pod:"coredns-66bc5c9577-pf5pb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7dc8709255", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:08:29.883694 containerd[2043]: 2026-01-14 00:08:29.860 [INFO][5186] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.2/32] ContainerID="8a14b08ac1e1f7e11b002da0104f3d9fb9b642ccc46162ac9e137dd56eb8f72a" Namespace="kube-system" Pod="coredns-66bc5c9577-pf5pb" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-coredns--66bc5c9577--pf5pb-eth0" Jan 14 00:08:29.883694 containerd[2043]: 2026-01-14 00:08:29.860 [INFO][5186] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7dc8709255 ContainerID="8a14b08ac1e1f7e11b002da0104f3d9fb9b642ccc46162ac9e137dd56eb8f72a" Namespace="kube-system" Pod="coredns-66bc5c9577-pf5pb" 
WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-coredns--66bc5c9577--pf5pb-eth0" Jan 14 00:08:29.883694 containerd[2043]: 2026-01-14 00:08:29.868 [INFO][5186] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a14b08ac1e1f7e11b002da0104f3d9fb9b642ccc46162ac9e137dd56eb8f72a" Namespace="kube-system" Pod="coredns-66bc5c9577-pf5pb" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-coredns--66bc5c9577--pf5pb-eth0" Jan 14 00:08:29.883694 containerd[2043]: 2026-01-14 00:08:29.868 [INFO][5186] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8a14b08ac1e1f7e11b002da0104f3d9fb9b642ccc46162ac9e137dd56eb8f72a" Namespace="kube-system" Pod="coredns-66bc5c9577-pf5pb" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-coredns--66bc5c9577--pf5pb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--16ff4e9fd7-k8s-coredns--66bc5c9577--pf5pb-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"293030d0-9140-4eed-bcb6-fdd77ad1a81b", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 7, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-16ff4e9fd7", ContainerID:"8a14b08ac1e1f7e11b002da0104f3d9fb9b642ccc46162ac9e137dd56eb8f72a", Pod:"coredns-66bc5c9577-pf5pb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7dc8709255", MAC:"3e:4d:d0:05:bf:4c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:08:29.883810 containerd[2043]: 2026-01-14 00:08:29.879 [INFO][5186] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8a14b08ac1e1f7e11b002da0104f3d9fb9b642ccc46162ac9e137dd56eb8f72a" Namespace="kube-system" Pod="coredns-66bc5c9577-pf5pb" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-coredns--66bc5c9577--pf5pb-eth0" Jan 14 00:08:29.893000 audit[5234]: NETFILTER_CFG table=filter:128 family=2 entries=42 op=nft_register_chain pid=5234 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:08:29.898459 
kernel: kauditd_printk_skb: 231 callbacks suppressed Jan 14 00:08:29.898527 kernel: audit: type=1325 audit(1768349309.893:673): table=filter:128 family=2 entries=42 op=nft_register_chain pid=5234 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:08:29.893000 audit[5234]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22552 a0=3 a1=ffffc190cb70 a2=0 a3=ffffbbacafa8 items=0 ppid=5000 pid=5234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:29.927610 kernel: audit: type=1300 audit(1768349309.893:673): arch=c00000b7 syscall=211 success=yes exit=22552 a0=3 a1=ffffc190cb70 a2=0 a3=ffffbbacafa8 items=0 ppid=5000 pid=5234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:29.893000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:08:29.940625 kernel: audit: type=1327 audit(1768349309.893:673): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:08:29.985411 systemd-networkd[1621]: cali3fe9d74e896: Link UP Jan 14 00:08:29.986231 systemd-networkd[1621]: cali3fe9d74e896: Gained carrier Jan 14 00:08:30.008025 containerd[2043]: 2026-01-14 00:08:29.807 [INFO][5190] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--n--16ff4e9fd7-k8s-calico--kube--controllers--7757665449--ss5lr-eth0 calico-kube-controllers-7757665449- calico-system 0d1fcd1d-b4e7-438b-8969-12e4764b6063 832 0 2026-01-14 00:07:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7757665449 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4547.0.0-n-16ff4e9fd7 calico-kube-controllers-7757665449-ss5lr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3fe9d74e896 [] [] }} ContainerID="2e2466457a4bac638ee82b0e9e564f0f6afbe7317d97ab92a1a2ffe67c15fe81" Namespace="calico-system" Pod="calico-kube-controllers-7757665449-ss5lr" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--kube--controllers--7757665449--ss5lr-" Jan 14 00:08:30.008025 containerd[2043]: 2026-01-14 00:08:29.807 [INFO][5190] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2e2466457a4bac638ee82b0e9e564f0f6afbe7317d97ab92a1a2ffe67c15fe81" Namespace="calico-system" Pod="calico-kube-controllers-7757665449-ss5lr" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--kube--controllers--7757665449--ss5lr-eth0" Jan 14 00:08:30.008025 containerd[2043]: 2026-01-14 00:08:29.830 [INFO][5215] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e2466457a4bac638ee82b0e9e564f0f6afbe7317d97ab92a1a2ffe67c15fe81" HandleID="k8s-pod-network.2e2466457a4bac638ee82b0e9e564f0f6afbe7317d97ab92a1a2ffe67c15fe81" Workload="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--kube--controllers--7757665449--ss5lr-eth0" Jan 14 00:08:30.008025 containerd[2043]: 2026-01-14 00:08:29.830 
[INFO][5215] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2e2466457a4bac638ee82b0e9e564f0f6afbe7317d97ab92a1a2ffe67c15fe81" HandleID="k8s-pod-network.2e2466457a4bac638ee82b0e9e564f0f6afbe7317d97ab92a1a2ffe67c15fe81" Workload="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--kube--controllers--7757665449--ss5lr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3050), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-n-16ff4e9fd7", "pod":"calico-kube-controllers-7757665449-ss5lr", "timestamp":"2026-01-14 00:08:29.830143017 +0000 UTC"}, Hostname:"ci-4547.0.0-n-16ff4e9fd7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:08:30.008025 containerd[2043]: 2026-01-14 00:08:29.830 [INFO][5215] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 00:08:30.008025 containerd[2043]: 2026-01-14 00:08:29.857 [INFO][5215] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 00:08:30.008025 containerd[2043]: 2026-01-14 00:08:29.858 [INFO][5215] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-n-16ff4e9fd7' Jan 14 00:08:30.008025 containerd[2043]: 2026-01-14 00:08:29.942 [INFO][5215] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2e2466457a4bac638ee82b0e9e564f0f6afbe7317d97ab92a1a2ffe67c15fe81" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:30.008025 containerd[2043]: 2026-01-14 00:08:29.947 [INFO][5215] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:30.008025 containerd[2043]: 2026-01-14 00:08:29.951 [INFO][5215] ipam/ipam.go 511: Trying affinity for 192.168.44.0/26 host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:30.008025 containerd[2043]: 2026-01-14 00:08:29.953 [INFO][5215] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.0/26 host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:30.008025 containerd[2043]: 2026-01-14 00:08:29.955 [INFO][5215] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.0/26 host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:30.008025 containerd[2043]: 2026-01-14 00:08:29.955 [INFO][5215] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.44.0/26 handle="k8s-pod-network.2e2466457a4bac638ee82b0e9e564f0f6afbe7317d97ab92a1a2ffe67c15fe81" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:30.008025 containerd[2043]: 2026-01-14 00:08:29.956 [INFO][5215] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2e2466457a4bac638ee82b0e9e564f0f6afbe7317d97ab92a1a2ffe67c15fe81 Jan 14 00:08:30.008025 containerd[2043]: 2026-01-14 00:08:29.970 [INFO][5215] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.44.0/26 handle="k8s-pod-network.2e2466457a4bac638ee82b0e9e564f0f6afbe7317d97ab92a1a2ffe67c15fe81" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:30.008025 containerd[2043]: 2026-01-14 00:08:29.980 [INFO][5215] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.44.3/26] block=192.168.44.0/26 handle="k8s-pod-network.2e2466457a4bac638ee82b0e9e564f0f6afbe7317d97ab92a1a2ffe67c15fe81" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:30.008025 containerd[2043]: 2026-01-14 00:08:29.980 [INFO][5215] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.3/26] handle="k8s-pod-network.2e2466457a4bac638ee82b0e9e564f0f6afbe7317d97ab92a1a2ffe67c15fe81" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 
00:08:30.008025 containerd[2043]: 2026-01-14 00:08:29.980 [INFO][5215] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 00:08:30.008025 containerd[2043]: 2026-01-14 00:08:29.980 [INFO][5215] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.44.3/26] IPv6=[] ContainerID="2e2466457a4bac638ee82b0e9e564f0f6afbe7317d97ab92a1a2ffe67c15fe81" HandleID="k8s-pod-network.2e2466457a4bac638ee82b0e9e564f0f6afbe7317d97ab92a1a2ffe67c15fe81" Workload="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--kube--controllers--7757665449--ss5lr-eth0" Jan 14 00:08:30.017000 audit[5244]: NETFILTER_CFG table=filter:129 family=2 entries=40 op=nft_register_chain pid=5244 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:08:30.017000 audit[5244]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20764 a0=3 a1=ffffe87e5190 a2=0 a3=ffff99fa7fa8 items=0 ppid=5000 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:30.029432 containerd[2043]: 2026-01-14 00:08:29.982 [INFO][5190] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2e2466457a4bac638ee82b0e9e564f0f6afbe7317d97ab92a1a2ffe67c15fe81" Namespace="calico-system" Pod="calico-kube-controllers-7757665449-ss5lr" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--kube--controllers--7757665449--ss5lr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--16ff4e9fd7-k8s-calico--kube--controllers--7757665449--ss5lr-eth0", GenerateName:"calico-kube-controllers-7757665449-", Namespace:"calico-system", SelfLink:"", UID:"0d1fcd1d-b4e7-438b-8969-12e4764b6063", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 7, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7757665449", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-16ff4e9fd7", ContainerID:"", Pod:"calico-kube-controllers-7757665449-ss5lr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3fe9d74e896", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:08:30.029432 containerd[2043]: 2026-01-14 00:08:29.982 [INFO][5190] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.3/32] ContainerID="2e2466457a4bac638ee82b0e9e564f0f6afbe7317d97ab92a1a2ffe67c15fe81" Namespace="calico-system" Pod="calico-kube-controllers-7757665449-ss5lr" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--kube--controllers--7757665449--ss5lr-eth0" Jan 14 00:08:30.029432 containerd[2043]: 2026-01-14 00:08:29.982 [INFO][5190] cni-plugin/dataplane_linux.go 69: Setting the host side veth 
name to cali3fe9d74e896 ContainerID="2e2466457a4bac638ee82b0e9e564f0f6afbe7317d97ab92a1a2ffe67c15fe81" Namespace="calico-system" Pod="calico-kube-controllers-7757665449-ss5lr" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--kube--controllers--7757665449--ss5lr-eth0" Jan 14 00:08:30.029432 containerd[2043]: 2026-01-14 00:08:29.987 [INFO][5190] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e2466457a4bac638ee82b0e9e564f0f6afbe7317d97ab92a1a2ffe67c15fe81" Namespace="calico-system" Pod="calico-kube-controllers-7757665449-ss5lr" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--kube--controllers--7757665449--ss5lr-eth0" Jan 14 00:08:30.029432 containerd[2043]: 2026-01-14 00:08:29.989 [INFO][5190] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2e2466457a4bac638ee82b0e9e564f0f6afbe7317d97ab92a1a2ffe67c15fe81" Namespace="calico-system" Pod="calico-kube-controllers-7757665449-ss5lr" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--kube--controllers--7757665449--ss5lr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--16ff4e9fd7-k8s-calico--kube--controllers--7757665449--ss5lr-eth0", GenerateName:"calico-kube-controllers-7757665449-", Namespace:"calico-system", SelfLink:"", UID:"0d1fcd1d-b4e7-438b-8969-12e4764b6063", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 7, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7757665449", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-16ff4e9fd7", ContainerID:"2e2466457a4bac638ee82b0e9e564f0f6afbe7317d97ab92a1a2ffe67c15fe81", Pod:"calico-kube-controllers-7757665449-ss5lr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3fe9d74e896", MAC:"3e:96:fb:99:9c:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:08:30.029432 containerd[2043]: 2026-01-14 00:08:30.005 [INFO][5190] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2e2466457a4bac638ee82b0e9e564f0f6afbe7317d97ab92a1a2ffe67c15fe81" Namespace="calico-system" Pod="calico-kube-controllers-7757665449-ss5lr" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--kube--controllers--7757665449--ss5lr-eth0" Jan 14 00:08:30.048127 kernel: audit: type=1325 audit(1768349310.017:674): table=filter:129 family=2 entries=40 op=nft_register_chain pid=5244 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:08:30.048261 kernel: audit: type=1300 audit(1768349310.017:674): arch=c00000b7 syscall=211 success=yes exit=20764 a0=3 a1=ffffe87e5190 a2=0 a3=ffff99fa7fa8 items=0 ppid=5000 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:30.017000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:08:30.060481 kernel: audit: type=1327 audit(1768349310.017:674): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:08:30.104659 containerd[2043]: time="2026-01-14T00:08:30.104619667Z" level=info msg="connecting to shim 8a14b08ac1e1f7e11b002da0104f3d9fb9b642ccc46162ac9e137dd56eb8f72a" address="unix:///run/containerd/s/11c3a97d852a7a7ba7f3b3a3991b162905c61a52764d659bce09d09048c86a5c" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:08:30.126227 systemd[1]: Started cri-containerd-8a14b08ac1e1f7e11b002da0104f3d9fb9b642ccc46162ac9e137dd56eb8f72a.scope - libcontainer container 8a14b08ac1e1f7e11b002da0104f3d9fb9b642ccc46162ac9e137dd56eb8f72a. Jan 14 00:08:30.131005 containerd[2043]: time="2026-01-14T00:08:30.130811845Z" level=info msg="connecting to shim 2e2466457a4bac638ee82b0e9e564f0f6afbe7317d97ab92a1a2ffe67c15fe81" address="unix:///run/containerd/s/cccde0fcfaa92d3cc187d5289733be7705eb9376cb53b6fed8dcb738c3aa9ecd" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:08:30.140000 audit: BPF prog-id=235 op=LOAD Jan 14 00:08:30.145000 audit: BPF prog-id=236 op=LOAD Jan 14 00:08:30.150971 kernel: audit: type=1334 audit(1768349310.140:675): prog-id=235 op=LOAD Jan 14 00:08:30.151055 kernel: audit: type=1334 audit(1768349310.145:676): prog-id=236 op=LOAD Jan 14 00:08:30.145000 audit[5265]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5252 pid=5265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:30.169869 kernel: audit: type=1300 audit(1768349310.145:676): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5252 pid=5265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:30.145000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861313462303861633165316637653131623030326461303130346633 Jan 14 00:08:30.189118 kernel: audit: type=1327 audit(1768349310.145:676): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861313462303861633165316637653131623030326461303130346633 Jan 14 00:08:30.145000 audit: BPF prog-id=236 op=UNLOAD Jan 14 00:08:30.145000 audit[5265]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5252 pid=5265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:30.145000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861313462303861633165316637653131623030326461303130346633 Jan 14 00:08:30.145000 audit: BPF prog-id=237 op=LOAD Jan 14 00:08:30.145000 audit[5265]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5252 pid=5265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:30.145000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861313462303861633165316637653131623030326461303130346633 Jan 14 00:08:30.150000 audit: BPF prog-id=238 op=LOAD Jan 14 00:08:30.150000 audit[5265]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5252 pid=5265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:30.150000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861313462303861633165316637653131623030326461303130346633 Jan 14 00:08:30.150000 audit: BPF prog-id=238 op=UNLOAD Jan 14 00:08:30.150000 audit[5265]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5252 pid=5265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:30.150000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861313462303861633165316637653131623030326461303130346633 Jan 14 00:08:30.150000 audit: BPF prog-id=237 op=UNLOAD Jan 14 00:08:30.150000 audit[5265]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5252 pid=5265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:30.150000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861313462303861633165316637653131623030326461303130346633 Jan 14 00:08:30.150000 audit: BPF prog-id=239 op=LOAD Jan 14 00:08:30.150000 audit[5265]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5252 pid=5265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:30.150000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861313462303861633165316637653131623030326461303130346633 Jan 14 00:08:30.198224 systemd[1]: Started cri-containerd-2e2466457a4bac638ee82b0e9e564f0f6afbe7317d97ab92a1a2ffe67c15fe81.scope - libcontainer container 2e2466457a4bac638ee82b0e9e564f0f6afbe7317d97ab92a1a2ffe67c15fe81. Jan 14 00:08:30.209000 audit: BPF prog-id=240 op=LOAD Jan 14 00:08:30.211000 audit: BPF prog-id=241 op=LOAD Jan 14 00:08:30.211000 audit[5304]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5285 pid=5304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:30.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265323436363435376134626163363338656538326230653965353634 Jan 14 00:08:30.211000 audit: BPF prog-id=241 op=UNLOAD Jan 14 00:08:30.211000 audit[5304]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5285 pid=5304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:30.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265323436363435376134626163363338656538326230653965353634 Jan 14 00:08:30.211000 audit: BPF prog-id=242 op=LOAD Jan 14 00:08:30.211000 audit[5304]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5285 pid=5304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:30.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265323436363435376134626163363338656538326230653965353634 Jan 14 00:08:30.212000 audit: BPF prog-id=243 op=LOAD Jan 14 00:08:30.212000 audit[5304]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5285 pid=5304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:30.212000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265323436363435376134626163363338656538326230653965353634 Jan 14 00:08:30.212000 audit: BPF prog-id=243 op=UNLOAD Jan 14 00:08:30.212000 audit[5304]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5285 pid=5304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:30.212000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265323436363435376134626163363338656538326230653965353634 Jan 14 00:08:30.212000 audit: BPF prog-id=242 op=UNLOAD Jan 14 00:08:30.212000 audit[5304]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5285 pid=5304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:30.212000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265323436363435376134626163363338656538326230653965353634 Jan 14 00:08:30.212000 audit: BPF prog-id=244 op=LOAD Jan 14 00:08:30.212000 audit[5304]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5285 pid=5304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:30.212000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265323436363435376134626163363338656538326230653965353634 Jan 14 00:08:30.216871 containerd[2043]: time="2026-01-14T00:08:30.216777963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pf5pb,Uid:293030d0-9140-4eed-bcb6-fdd77ad1a81b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a14b08ac1e1f7e11b002da0104f3d9fb9b642ccc46162ac9e137dd56eb8f72a\"" Jan 14 00:08:30.226551 containerd[2043]: time="2026-01-14T00:08:30.226097619Z" level=info msg="CreateContainer within sandbox \"8a14b08ac1e1f7e11b002da0104f3d9fb9b642ccc46162ac9e137dd56eb8f72a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 00:08:30.245943 containerd[2043]: time="2026-01-14T00:08:30.245903069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7757665449-ss5lr,Uid:0d1fcd1d-b4e7-438b-8969-12e4764b6063,Namespace:calico-system,Attempt:0,} returns sandbox id \"2e2466457a4bac638ee82b0e9e564f0f6afbe7317d97ab92a1a2ffe67c15fe81\"" Jan 14 00:08:30.247740 containerd[2043]: time="2026-01-14T00:08:30.247646650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 00:08:30.253744 containerd[2043]: time="2026-01-14T00:08:30.253324428Z" level=info msg="Container 8e7cfffd5d2951501aad40ec0fe8ce767ae6f740ab5abcc6fca7fe757a85ea60: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:08:30.270623 containerd[2043]: time="2026-01-14T00:08:30.270511544Z" level=info msg="CreateContainer within sandbox \"8a14b08ac1e1f7e11b002da0104f3d9fb9b642ccc46162ac9e137dd56eb8f72a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8e7cfffd5d2951501aad40ec0fe8ce767ae6f740ab5abcc6fca7fe757a85ea60\"" Jan 14 00:08:30.272027 containerd[2043]: time="2026-01-14T00:08:30.271764382Z" level=info msg="StartContainer for 
\"8e7cfffd5d2951501aad40ec0fe8ce767ae6f740ab5abcc6fca7fe757a85ea60\"" Jan 14 00:08:30.273632 containerd[2043]: time="2026-01-14T00:08:30.273598773Z" level=info msg="connecting to shim 8e7cfffd5d2951501aad40ec0fe8ce767ae6f740ab5abcc6fca7fe757a85ea60" address="unix:///run/containerd/s/11c3a97d852a7a7ba7f3b3a3991b162905c61a52764d659bce09d09048c86a5c" protocol=ttrpc version=3 Jan 14 00:08:30.289162 systemd[1]: Started cri-containerd-8e7cfffd5d2951501aad40ec0fe8ce767ae6f740ab5abcc6fca7fe757a85ea60.scope - libcontainer container 8e7cfffd5d2951501aad40ec0fe8ce767ae6f740ab5abcc6fca7fe757a85ea60. Jan 14 00:08:30.298000 audit: BPF prog-id=245 op=LOAD Jan 14 00:08:30.298000 audit: BPF prog-id=246 op=LOAD Jan 14 00:08:30.298000 audit[5337]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5252 pid=5337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:30.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865376366666664356432393531353031616164343065633066653863 Jan 14 00:08:30.299000 audit: BPF prog-id=246 op=UNLOAD Jan 14 00:08:30.299000 audit[5337]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5252 pid=5337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:30.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865376366666664356432393531353031616164343065633066653863 Jan 14 00:08:30.299000 audit: BPF prog-id=247 op=LOAD Jan 14 00:08:30.299000 audit[5337]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5252 pid=5337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:30.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865376366666664356432393531353031616164343065633066653863 Jan 14 00:08:30.299000 audit: BPF prog-id=248 op=LOAD Jan 14 00:08:30.299000 audit[5337]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5252 pid=5337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:30.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865376366666664356432393531353031616164343065633066653863 Jan 14 00:08:30.299000 audit: BPF prog-id=248 op=UNLOAD Jan 14 00:08:30.299000 audit[5337]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 
ppid=5252 pid=5337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:30.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865376366666664356432393531353031616164343065633066653863 Jan 14 00:08:30.299000 audit: BPF prog-id=247 op=UNLOAD Jan 14 00:08:30.299000 audit[5337]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5252 pid=5337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:30.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865376366666664356432393531353031616164343065633066653863 Jan 14 00:08:30.299000 audit: BPF prog-id=249 op=LOAD Jan 14 00:08:30.299000 audit[5337]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5252 pid=5337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:30.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865376366666664356432393531353031616164343065633066653863 Jan 14 00:08:30.318061 containerd[2043]: time="2026-01-14T00:08:30.317929304Z" level=info msg="StartContainer for \"8e7cfffd5d2951501aad40ec0fe8ce767ae6f740ab5abcc6fca7fe757a85ea60\" returns successfully" Jan 14 00:08:30.553979 containerd[2043]: time="2026-01-14T00:08:30.553700080Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:08:30.556956 containerd[2043]: time="2026-01-14T00:08:30.556813485Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 00:08:30.556956 containerd[2043]: time="2026-01-14T00:08:30.556906608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 00:08:30.557176 kubelet[3576]: E0114 00:08:30.557110 3576 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 00:08:30.557176 kubelet[3576]: E0114 00:08:30.557172 3576 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 
00:08:30.557821 kubelet[3576]: E0114 00:08:30.557244 3576 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7757665449-ss5lr_calico-system(0d1fcd1d-b4e7-438b-8969-12e4764b6063): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 00:08:30.557821 kubelet[3576]: E0114 00:08:30.557273 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7757665449-ss5lr" podUID="0d1fcd1d-b4e7-438b-8969-12e4764b6063" Jan 14 00:08:30.739099 containerd[2043]: time="2026-01-14T00:08:30.739048774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9k9wp,Uid:f82b7507-d8a2-4f12-bcc9-0288422aaee6,Namespace:kube-system,Attempt:0,}" Jan 14 00:08:30.743715 containerd[2043]: time="2026-01-14T00:08:30.743679721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df7698878-xcd87,Uid:703a17f2-f0e0-477c-b942-3e7b76e59fda,Namespace:calico-apiserver,Attempt:0,}" Jan 14 00:08:30.748883 containerd[2043]: time="2026-01-14T00:08:30.748795795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f767b88f-ntdwm,Uid:ce38d8d5-b119-4ec5-8427-02101a96fcd0,Namespace:calico-apiserver,Attempt:0,}" Jan 14 00:08:30.753294 containerd[2043]: time="2026-01-14T00:08:30.753267521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-zrb5n,Uid:a80d0249-070a-486c-a74c-948bf824745a,Namespace:calico-system,Attempt:0,}" Jan 14 00:08:30.929457 kubelet[3576]: E0114 00:08:30.929150 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7757665449-ss5lr" podUID="0d1fcd1d-b4e7-438b-8969-12e4764b6063" Jan 14 00:08:30.942977 systemd-networkd[1621]: cali6d57579367c: Link UP Jan 14 00:08:30.943600 systemd-networkd[1621]: cali6d57579367c: Gained carrier Jan 14 00:08:30.968199 containerd[2043]: 2026-01-14 00:08:30.826 [INFO][5368] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--n--16ff4e9fd7-k8s-coredns--66bc5c9577--9k9wp-eth0 coredns-66bc5c9577- kube-system f82b7507-d8a2-4f12-bcc9-0288422aaee6 831 0 2026-01-14 00:07:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547.0.0-n-16ff4e9fd7 coredns-66bc5c9577-9k9wp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6d57579367c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 
8181 0 }] [] }} ContainerID="35f3ca4431d1a3a38d3fbb604b4f7b422f19e2728ec3f4bd4c5231dcdf8395aa" Namespace="kube-system" Pod="coredns-66bc5c9577-9k9wp" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-coredns--66bc5c9577--9k9wp-" Jan 14 00:08:30.968199 containerd[2043]: 2026-01-14 00:08:30.826 [INFO][5368] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="35f3ca4431d1a3a38d3fbb604b4f7b422f19e2728ec3f4bd4c5231dcdf8395aa" Namespace="kube-system" Pod="coredns-66bc5c9577-9k9wp" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-coredns--66bc5c9577--9k9wp-eth0" Jan 14 00:08:30.968199 containerd[2043]: 2026-01-14 00:08:30.879 [INFO][5416] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35f3ca4431d1a3a38d3fbb604b4f7b422f19e2728ec3f4bd4c5231dcdf8395aa" HandleID="k8s-pod-network.35f3ca4431d1a3a38d3fbb604b4f7b422f19e2728ec3f4bd4c5231dcdf8395aa" Workload="ci--4547.0.0--n--16ff4e9fd7-k8s-coredns--66bc5c9577--9k9wp-eth0" Jan 14 00:08:30.968199 containerd[2043]: 2026-01-14 00:08:30.879 [INFO][5416] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="35f3ca4431d1a3a38d3fbb604b4f7b422f19e2728ec3f4bd4c5231dcdf8395aa" HandleID="k8s-pod-network.35f3ca4431d1a3a38d3fbb604b4f7b422f19e2728ec3f4bd4c5231dcdf8395aa" Workload="ci--4547.0.0--n--16ff4e9fd7-k8s-coredns--66bc5c9577--9k9wp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3160), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547.0.0-n-16ff4e9fd7", "pod":"coredns-66bc5c9577-9k9wp", "timestamp":"2026-01-14 00:08:30.879350851 +0000 UTC"}, Hostname:"ci-4547.0.0-n-16ff4e9fd7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:08:30.968199 containerd[2043]: 2026-01-14 00:08:30.879 [INFO][5416] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 00:08:30.968199 containerd[2043]: 2026-01-14 00:08:30.879 [INFO][5416] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 00:08:30.968199 containerd[2043]: 2026-01-14 00:08:30.879 [INFO][5416] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-n-16ff4e9fd7' Jan 14 00:08:30.968199 containerd[2043]: 2026-01-14 00:08:30.889 [INFO][5416] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.35f3ca4431d1a3a38d3fbb604b4f7b422f19e2728ec3f4bd4c5231dcdf8395aa" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:30.968199 containerd[2043]: 2026-01-14 00:08:30.893 [INFO][5416] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:30.968199 containerd[2043]: 2026-01-14 00:08:30.900 [INFO][5416] ipam/ipam.go 511: Trying affinity for 192.168.44.0/26 host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:30.968199 containerd[2043]: 2026-01-14 00:08:30.903 [INFO][5416] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.0/26 host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:30.968199 containerd[2043]: 2026-01-14 00:08:30.907 [INFO][5416] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.0/26 host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:30.968199 containerd[2043]: 2026-01-14 00:08:30.907 [INFO][5416] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.44.0/26 handle="k8s-pod-network.35f3ca4431d1a3a38d3fbb604b4f7b422f19e2728ec3f4bd4c5231dcdf8395aa" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:30.968199 containerd[2043]: 2026-01-14 00:08:30.909 [INFO][5416] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.35f3ca4431d1a3a38d3fbb604b4f7b422f19e2728ec3f4bd4c5231dcdf8395aa Jan 14 00:08:30.968199 containerd[2043]: 2026-01-14 00:08:30.918 [INFO][5416] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.44.0/26 handle="k8s-pod-network.35f3ca4431d1a3a38d3fbb604b4f7b422f19e2728ec3f4bd4c5231dcdf8395aa" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:30.968199 containerd[2043]: 2026-01-14 00:08:30.932 [INFO][5416] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.44.4/26] block=192.168.44.0/26 handle="k8s-pod-network.35f3ca4431d1a3a38d3fbb604b4f7b422f19e2728ec3f4bd4c5231dcdf8395aa" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:30.968199 containerd[2043]: 2026-01-14 00:08:30.932 [INFO][5416] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.4/26] handle="k8s-pod-network.35f3ca4431d1a3a38d3fbb604b4f7b422f19e2728ec3f4bd4c5231dcdf8395aa" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:30.968199 containerd[2043]: 2026-01-14 00:08:30.933 [INFO][5416] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 00:08:30.968199 containerd[2043]: 2026-01-14 00:08:30.933 [INFO][5416] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.44.4/26] IPv6=[] ContainerID="35f3ca4431d1a3a38d3fbb604b4f7b422f19e2728ec3f4bd4c5231dcdf8395aa" HandleID="k8s-pod-network.35f3ca4431d1a3a38d3fbb604b4f7b422f19e2728ec3f4bd4c5231dcdf8395aa" Workload="ci--4547.0.0--n--16ff4e9fd7-k8s-coredns--66bc5c9577--9k9wp-eth0" Jan 14 00:08:30.968648 containerd[2043]: 2026-01-14 00:08:30.937 [INFO][5368] cni-plugin/k8s.go 418: Populated endpoint ContainerID="35f3ca4431d1a3a38d3fbb604b4f7b422f19e2728ec3f4bd4c5231dcdf8395aa" Namespace="kube-system" Pod="coredns-66bc5c9577-9k9wp" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-coredns--66bc5c9577--9k9wp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--16ff4e9fd7-k8s-coredns--66bc5c9577--9k9wp-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f82b7507-d8a2-4f12-bcc9-0288422aaee6", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 7, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-16ff4e9fd7", ContainerID:"", Pod:"coredns-66bc5c9577-9k9wp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6d57579367c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:08:30.968648 containerd[2043]: 2026-01-14 00:08:30.937 [INFO][5368] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.4/32] ContainerID="35f3ca4431d1a3a38d3fbb604b4f7b422f19e2728ec3f4bd4c5231dcdf8395aa" Namespace="kube-system" Pod="coredns-66bc5c9577-9k9wp" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-coredns--66bc5c9577--9k9wp-eth0" Jan 14 00:08:30.968648 containerd[2043]: 2026-01-14 00:08:30.937 [INFO][5368] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d57579367c ContainerID="35f3ca4431d1a3a38d3fbb604b4f7b422f19e2728ec3f4bd4c5231dcdf8395aa" Namespace="kube-system" Pod="coredns-66bc5c9577-9k9wp" 
WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-coredns--66bc5c9577--9k9wp-eth0" Jan 14 00:08:30.968648 containerd[2043]: 2026-01-14 00:08:30.944 [INFO][5368] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35f3ca4431d1a3a38d3fbb604b4f7b422f19e2728ec3f4bd4c5231dcdf8395aa" Namespace="kube-system" Pod="coredns-66bc5c9577-9k9wp" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-coredns--66bc5c9577--9k9wp-eth0" Jan 14 00:08:30.968648 containerd[2043]: 2026-01-14 00:08:30.944 [INFO][5368] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="35f3ca4431d1a3a38d3fbb604b4f7b422f19e2728ec3f4bd4c5231dcdf8395aa" Namespace="kube-system" Pod="coredns-66bc5c9577-9k9wp" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-coredns--66bc5c9577--9k9wp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--16ff4e9fd7-k8s-coredns--66bc5c9577--9k9wp-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f82b7507-d8a2-4f12-bcc9-0288422aaee6", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 7, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-16ff4e9fd7", ContainerID:"35f3ca4431d1a3a38d3fbb604b4f7b422f19e2728ec3f4bd4c5231dcdf8395aa", Pod:"coredns-66bc5c9577-9k9wp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6d57579367c", MAC:"a6:ea:28:b3:e9:ca", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:08:30.968762 containerd[2043]: 2026-01-14 00:08:30.964 [INFO][5368] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="35f3ca4431d1a3a38d3fbb604b4f7b422f19e2728ec3f4bd4c5231dcdf8395aa" Namespace="kube-system" Pod="coredns-66bc5c9577-9k9wp" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-coredns--66bc5c9577--9k9wp-eth0" Jan 14 00:08:30.985000 audit[5454]: NETFILTER_CFG table=filter:130 family=2 entries=40 op=nft_register_chain pid=5454 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:08:30.985000 
audit[5454]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20344 a0=3 a1=ffffe1c5fac0 a2=0 a3=ffff9eb6bfa8 items=0 ppid=5000 pid=5454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:30.985000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:08:31.006000 audit[5457]: NETFILTER_CFG table=filter:131 family=2 entries=17 op=nft_register_rule pid=5457 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:08:31.006000 audit[5457]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffee298340 a2=0 a3=1 items=0 ppid=3727 pid=5457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.006000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:08:31.011000 audit[5457]: NETFILTER_CFG table=nat:132 family=2 entries=35 op=nft_register_chain pid=5457 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:08:31.011000 audit[5457]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffee298340 a2=0 a3=1 items=0 ppid=3727 pid=5457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.011000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:08:31.034927 systemd-networkd[1621]: cali3edb4930aef: Link UP Jan 14 00:08:31.036166 systemd-networkd[1621]: cali3edb4930aef: Gained carrier Jan 14 00:08:31.053094 kubelet[3576]: I0114 00:08:31.052441 3576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-pf5pb" podStartSLOduration=60.05242212 podStartE2EDuration="1m0.05242212s" podCreationTimestamp="2026-01-14 00:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:08:30.983772842 +0000 UTC m=+65.362175025" watchObservedRunningTime="2026-01-14 00:08:31.05242212 +0000 UTC m=+65.430824303" Jan 14 00:08:31.054680 containerd[2043]: 2026-01-14 00:08:30.829 [INFO][5372] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--5df7698878--xcd87-eth0 calico-apiserver-5df7698878- calico-apiserver 703a17f2-f0e0-477c-b942-3e7b76e59fda 834 0 2026-01-14 00:07:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5df7698878 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.0.0-n-16ff4e9fd7 calico-apiserver-5df7698878-xcd87 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3edb4930aef [] [] }} ContainerID="d702addcb8aafc11a71752d7887abdf02bf178d7dd0dab903a67b82867c71c5b" Namespace="calico-apiserver" Pod="calico-apiserver-5df7698878-xcd87" 
WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--5df7698878--xcd87-" Jan 14 00:08:31.054680 containerd[2043]: 2026-01-14 00:08:30.829 [INFO][5372] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d702addcb8aafc11a71752d7887abdf02bf178d7dd0dab903a67b82867c71c5b" Namespace="calico-apiserver" Pod="calico-apiserver-5df7698878-xcd87" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--5df7698878--xcd87-eth0" Jan 14 00:08:31.054680 containerd[2043]: 2026-01-14 00:08:30.894 [INFO][5417] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d702addcb8aafc11a71752d7887abdf02bf178d7dd0dab903a67b82867c71c5b" HandleID="k8s-pod-network.d702addcb8aafc11a71752d7887abdf02bf178d7dd0dab903a67b82867c71c5b" Workload="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--5df7698878--xcd87-eth0" Jan 14 00:08:31.054680 containerd[2043]: 2026-01-14 00:08:30.894 [INFO][5417] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d702addcb8aafc11a71752d7887abdf02bf178d7dd0dab903a67b82867c71c5b" HandleID="k8s-pod-network.d702addcb8aafc11a71752d7887abdf02bf178d7dd0dab903a67b82867c71c5b" Workload="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--5df7698878--xcd87-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b900), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.0.0-n-16ff4e9fd7", "pod":"calico-apiserver-5df7698878-xcd87", "timestamp":"2026-01-14 00:08:30.894088493 +0000 UTC"}, Hostname:"ci-4547.0.0-n-16ff4e9fd7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:08:31.054680 containerd[2043]: 2026-01-14 00:08:30.894 [INFO][5417] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 00:08:31.054680 containerd[2043]: 2026-01-14 00:08:30.934 [INFO][5417] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 00:08:31.054680 containerd[2043]: 2026-01-14 00:08:30.934 [INFO][5417] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-n-16ff4e9fd7' Jan 14 00:08:31.054680 containerd[2043]: 2026-01-14 00:08:30.989 [INFO][5417] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d702addcb8aafc11a71752d7887abdf02bf178d7dd0dab903a67b82867c71c5b" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.054680 containerd[2043]: 2026-01-14 00:08:30.998 [INFO][5417] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.054680 containerd[2043]: 2026-01-14 00:08:31.008 [INFO][5417] ipam/ipam.go 511: Trying affinity for 192.168.44.0/26 host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.054680 containerd[2043]: 2026-01-14 00:08:31.011 [INFO][5417] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.0/26 host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.054680 containerd[2043]: 2026-01-14 00:08:31.014 [INFO][5417] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.0/26 host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.054680 containerd[2043]: 2026-01-14 00:08:31.014 [INFO][5417] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.44.0/26 handle="k8s-pod-network.d702addcb8aafc11a71752d7887abdf02bf178d7dd0dab903a67b82867c71c5b" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.054680 containerd[2043]: 2026-01-14 00:08:31.015 [INFO][5417] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d702addcb8aafc11a71752d7887abdf02bf178d7dd0dab903a67b82867c71c5b Jan 14 00:08:31.054680 containerd[2043]: 2026-01-14 00:08:31.020 [INFO][5417] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.44.0/26 handle="k8s-pod-network.d702addcb8aafc11a71752d7887abdf02bf178d7dd0dab903a67b82867c71c5b" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.054680 containerd[2043]: 2026-01-14 00:08:31.029 [INFO][5417] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.44.5/26] block=192.168.44.0/26 handle="k8s-pod-network.d702addcb8aafc11a71752d7887abdf02bf178d7dd0dab903a67b82867c71c5b" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.054680 containerd[2043]: 2026-01-14 00:08:31.029 [INFO][5417] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.5/26] handle="k8s-pod-network.d702addcb8aafc11a71752d7887abdf02bf178d7dd0dab903a67b82867c71c5b" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.054680 containerd[2043]: 2026-01-14 00:08:31.030 [INFO][5417] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 00:08:31.054680 containerd[2043]: 2026-01-14 00:08:31.030 [INFO][5417] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.44.5/26] IPv6=[] ContainerID="d702addcb8aafc11a71752d7887abdf02bf178d7dd0dab903a67b82867c71c5b" HandleID="k8s-pod-network.d702addcb8aafc11a71752d7887abdf02bf178d7dd0dab903a67b82867c71c5b" Workload="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--5df7698878--xcd87-eth0" Jan 14 00:08:31.055153 containerd[2043]: 2026-01-14 00:08:31.032 [INFO][5372] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d702addcb8aafc11a71752d7887abdf02bf178d7dd0dab903a67b82867c71c5b" Namespace="calico-apiserver" Pod="calico-apiserver-5df7698878-xcd87" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--5df7698878--xcd87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--5df7698878--xcd87-eth0", GenerateName:"calico-apiserver-5df7698878-", Namespace:"calico-apiserver", SelfLink:"", UID:"703a17f2-f0e0-477c-b942-3e7b76e59fda", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 7, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df7698878", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-16ff4e9fd7", ContainerID:"", Pod:"calico-apiserver-5df7698878-xcd87", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3edb4930aef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:08:31.055153 containerd[2043]: 2026-01-14 00:08:31.032 [INFO][5372] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.5/32] ContainerID="d702addcb8aafc11a71752d7887abdf02bf178d7dd0dab903a67b82867c71c5b" Namespace="calico-apiserver" Pod="calico-apiserver-5df7698878-xcd87" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--5df7698878--xcd87-eth0" Jan 14 00:08:31.055153 containerd[2043]: 2026-01-14 00:08:31.032 [INFO][5372] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3edb4930aef ContainerID="d702addcb8aafc11a71752d7887abdf02bf178d7dd0dab903a67b82867c71c5b" Namespace="calico-apiserver" Pod="calico-apiserver-5df7698878-xcd87" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--5df7698878--xcd87-eth0" Jan 14 00:08:31.055153 containerd[2043]: 2026-01-14 00:08:31.036 [INFO][5372] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d702addcb8aafc11a71752d7887abdf02bf178d7dd0dab903a67b82867c71c5b" Namespace="calico-apiserver" Pod="calico-apiserver-5df7698878-xcd87" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--5df7698878--xcd87-eth0" Jan 14 00:08:31.055153 containerd[2043]: 2026-01-14 00:08:31.037 [INFO][5372] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d702addcb8aafc11a71752d7887abdf02bf178d7dd0dab903a67b82867c71c5b" Namespace="calico-apiserver" Pod="calico-apiserver-5df7698878-xcd87" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--5df7698878--xcd87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--5df7698878--xcd87-eth0", GenerateName:"calico-apiserver-5df7698878-", Namespace:"calico-apiserver", SelfLink:"", UID:"703a17f2-f0e0-477c-b942-3e7b76e59fda", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 7, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df7698878", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-16ff4e9fd7", ContainerID:"d702addcb8aafc11a71752d7887abdf02bf178d7dd0dab903a67b82867c71c5b", Pod:"calico-apiserver-5df7698878-xcd87", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3edb4930aef", MAC:"92:78:75:83:70:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:08:31.055153 containerd[2043]: 2026-01-14 00:08:31.051 [INFO][5372] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d702addcb8aafc11a71752d7887abdf02bf178d7dd0dab903a67b82867c71c5b" Namespace="calico-apiserver" Pod="calico-apiserver-5df7698878-xcd87" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--5df7698878--xcd87-eth0" Jan 14 00:08:31.067000 audit[5465]: NETFILTER_CFG table=filter:133 family=2 entries=68 op=nft_register_chain pid=5465 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:08:31.067000 audit[5465]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=34624 a0=3 a1=fffff623d6d0 a2=0 a3=ffff89ab7fa8 items=0 ppid=5000 pid=5465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.067000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:08:31.149396 systemd-networkd[1621]: calic772677a86d: Link UP Jan 14 00:08:31.151808 systemd-networkd[1621]: calic772677a86d: Gained carrier Jan 14 00:08:31.162873 containerd[2043]: time="2026-01-14T00:08:31.162823523Z" level=info msg="connecting to shim d702addcb8aafc11a71752d7887abdf02bf178d7dd0dab903a67b82867c71c5b" address="unix:///run/containerd/s/12305191ae5819128e2627aaf78495c537c04ab397e0bfce0c02dfb98b6f98bf" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:08:31.167017 
containerd[2043]: time="2026-01-14T00:08:31.166680919Z" level=info msg="connecting to shim 35f3ca4431d1a3a38d3fbb604b4f7b422f19e2728ec3f4bd4c5231dcdf8395aa" address="unix:///run/containerd/s/f75b12198943253f93bbf0d819832679e5e0a48d84e24ad027a6bee23c35b477" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:08:31.183544 containerd[2043]: 2026-01-14 00:08:30.860 [INFO][5391] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--n--16ff4e9fd7-k8s-goldmane--7c778bb748--zrb5n-eth0 goldmane-7c778bb748- calico-system a80d0249-070a-486c-a74c-948bf824745a 836 0 2026-01-14 00:07:47 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4547.0.0-n-16ff4e9fd7 goldmane-7c778bb748-zrb5n eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic772677a86d [] [] }} ContainerID="a246c315f382429d354df2ebeb1e83940520374958d0db8c3899cfcbf3583dda" Namespace="calico-system" Pod="goldmane-7c778bb748-zrb5n" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-goldmane--7c778bb748--zrb5n-" Jan 14 00:08:31.183544 containerd[2043]: 2026-01-14 00:08:30.860 [INFO][5391] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a246c315f382429d354df2ebeb1e83940520374958d0db8c3899cfcbf3583dda" Namespace="calico-system" Pod="goldmane-7c778bb748-zrb5n" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-goldmane--7c778bb748--zrb5n-eth0" Jan 14 00:08:31.183544 containerd[2043]: 2026-01-14 00:08:30.912 [INFO][5428] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a246c315f382429d354df2ebeb1e83940520374958d0db8c3899cfcbf3583dda" HandleID="k8s-pod-network.a246c315f382429d354df2ebeb1e83940520374958d0db8c3899cfcbf3583dda" Workload="ci--4547.0.0--n--16ff4e9fd7-k8s-goldmane--7c778bb748--zrb5n-eth0" Jan 14 00:08:31.183544 containerd[2043]: 2026-01-14 00:08:30.913 [INFO][5428] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a246c315f382429d354df2ebeb1e83940520374958d0db8c3899cfcbf3583dda" HandleID="k8s-pod-network.a246c315f382429d354df2ebeb1e83940520374958d0db8c3899cfcbf3583dda" Workload="ci--4547.0.0--n--16ff4e9fd7-k8s-goldmane--7c778bb748--zrb5n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-n-16ff4e9fd7", "pod":"goldmane-7c778bb748-zrb5n", "timestamp":"2026-01-14 00:08:30.912870585 +0000 UTC"}, Hostname:"ci-4547.0.0-n-16ff4e9fd7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:08:31.183544 containerd[2043]: 2026-01-14 00:08:30.913 [INFO][5428] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 00:08:31.183544 containerd[2043]: 2026-01-14 00:08:31.030 [INFO][5428] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 00:08:31.183544 containerd[2043]: 2026-01-14 00:08:31.030 [INFO][5428] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-n-16ff4e9fd7' Jan 14 00:08:31.183544 containerd[2043]: 2026-01-14 00:08:31.089 [INFO][5428] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a246c315f382429d354df2ebeb1e83940520374958d0db8c3899cfcbf3583dda" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.183544 containerd[2043]: 2026-01-14 00:08:31.097 [INFO][5428] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.183544 containerd[2043]: 2026-01-14 00:08:31.114 [INFO][5428] ipam/ipam.go 511: Trying affinity for 192.168.44.0/26 host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.183544 containerd[2043]: 2026-01-14 00:08:31.116 [INFO][5428] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.0/26 host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.183544 containerd[2043]: 2026-01-14 00:08:31.118 [INFO][5428] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.0/26 host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.183544 containerd[2043]: 2026-01-14 00:08:31.118 [INFO][5428] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.44.0/26 handle="k8s-pod-network.a246c315f382429d354df2ebeb1e83940520374958d0db8c3899cfcbf3583dda" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.183544 containerd[2043]: 2026-01-14 00:08:31.119 [INFO][5428] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a246c315f382429d354df2ebeb1e83940520374958d0db8c3899cfcbf3583dda Jan 14 00:08:31.183544 containerd[2043]: 2026-01-14 00:08:31.127 [INFO][5428] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.44.0/26 handle="k8s-pod-network.a246c315f382429d354df2ebeb1e83940520374958d0db8c3899cfcbf3583dda" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.183544 containerd[2043]: 2026-01-14 00:08:31.135 [INFO][5428] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.44.6/26] block=192.168.44.0/26 handle="k8s-pod-network.a246c315f382429d354df2ebeb1e83940520374958d0db8c3899cfcbf3583dda" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.183544 containerd[2043]: 2026-01-14 00:08:31.136 [INFO][5428] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.6/26] handle="k8s-pod-network.a246c315f382429d354df2ebeb1e83940520374958d0db8c3899cfcbf3583dda" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.183544 containerd[2043]: 2026-01-14 00:08:31.136 [INFO][5428] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 00:08:31.183544 containerd[2043]: 2026-01-14 00:08:31.136 [INFO][5428] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.44.6/26] IPv6=[] ContainerID="a246c315f382429d354df2ebeb1e83940520374958d0db8c3899cfcbf3583dda" HandleID="k8s-pod-network.a246c315f382429d354df2ebeb1e83940520374958d0db8c3899cfcbf3583dda" Workload="ci--4547.0.0--n--16ff4e9fd7-k8s-goldmane--7c778bb748--zrb5n-eth0" Jan 14 00:08:31.183950 containerd[2043]: 2026-01-14 00:08:31.145 [INFO][5391] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a246c315f382429d354df2ebeb1e83940520374958d0db8c3899cfcbf3583dda" Namespace="calico-system" Pod="goldmane-7c778bb748-zrb5n" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-goldmane--7c778bb748--zrb5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--16ff4e9fd7-k8s-goldmane--7c778bb748--zrb5n-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"a80d0249-070a-486c-a74c-948bf824745a", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 7, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-16ff4e9fd7", ContainerID:"", Pod:"goldmane-7c778bb748-zrb5n", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.44.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic772677a86d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:08:31.183950 containerd[2043]: 2026-01-14 00:08:31.145 [INFO][5391] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.6/32] ContainerID="a246c315f382429d354df2ebeb1e83940520374958d0db8c3899cfcbf3583dda" Namespace="calico-system" Pod="goldmane-7c778bb748-zrb5n" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-goldmane--7c778bb748--zrb5n-eth0" Jan 14 00:08:31.183950 containerd[2043]: 2026-01-14 00:08:31.145 [INFO][5391] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic772677a86d ContainerID="a246c315f382429d354df2ebeb1e83940520374958d0db8c3899cfcbf3583dda" Namespace="calico-system" Pod="goldmane-7c778bb748-zrb5n" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-goldmane--7c778bb748--zrb5n-eth0" Jan 14 00:08:31.183950 containerd[2043]: 2026-01-14 00:08:31.154 [INFO][5391] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a246c315f382429d354df2ebeb1e83940520374958d0db8c3899cfcbf3583dda" Namespace="calico-system" Pod="goldmane-7c778bb748-zrb5n" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-goldmane--7c778bb748--zrb5n-eth0" Jan 14 00:08:31.183950 containerd[2043]: 2026-01-14 00:08:31.155 [INFO][5391] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a246c315f382429d354df2ebeb1e83940520374958d0db8c3899cfcbf3583dda" 
Namespace="calico-system" Pod="goldmane-7c778bb748-zrb5n" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-goldmane--7c778bb748--zrb5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--16ff4e9fd7-k8s-goldmane--7c778bb748--zrb5n-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"a80d0249-070a-486c-a74c-948bf824745a", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 7, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-16ff4e9fd7", ContainerID:"a246c315f382429d354df2ebeb1e83940520374958d0db8c3899cfcbf3583dda", Pod:"goldmane-7c778bb748-zrb5n", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.44.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic772677a86d", MAC:"5e:3d:9c:9d:7f:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:08:31.183950 containerd[2043]: 2026-01-14 00:08:31.177 [INFO][5391] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a246c315f382429d354df2ebeb1e83940520374958d0db8c3899cfcbf3583dda" Namespace="calico-system" Pod="goldmane-7c778bb748-zrb5n" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-goldmane--7c778bb748--zrb5n-eth0" Jan 14 00:08:31.191250 systemd[1]: Started cri-containerd-d702addcb8aafc11a71752d7887abdf02bf178d7dd0dab903a67b82867c71c5b.scope - libcontainer container d702addcb8aafc11a71752d7887abdf02bf178d7dd0dab903a67b82867c71c5b. Jan 14 00:08:31.208425 systemd[1]: Started cri-containerd-35f3ca4431d1a3a38d3fbb604b4f7b422f19e2728ec3f4bd4c5231dcdf8395aa.scope - libcontainer container 35f3ca4431d1a3a38d3fbb604b4f7b422f19e2728ec3f4bd4c5231dcdf8395aa. 
Jan 14 00:08:31.209000 audit: BPF prog-id=250 op=LOAD Jan 14 00:08:31.210000 audit: BPF prog-id=251 op=LOAD Jan 14 00:08:31.210000 audit[5499]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c180 a2=98 a3=0 items=0 ppid=5482 pid=5499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.210000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437303261646463623861616663313161373137353264373838376162 Jan 14 00:08:31.211000 audit: BPF prog-id=251 op=UNLOAD Jan 14 00:08:31.211000 audit[5499]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5482 pid=5499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437303261646463623861616663313161373137353264373838376162 Jan 14 00:08:31.211000 audit: BPF prog-id=252 op=LOAD Jan 14 00:08:31.211000 audit[5499]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=5482 pid=5499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437303261646463623861616663313161373137353264373838376162 Jan 14 00:08:31.211000 audit: BPF prog-id=253 op=LOAD Jan 14 00:08:31.211000 audit[5499]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400010c168 a2=98 a3=0 items=0 ppid=5482 pid=5499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437303261646463623861616663313161373137353264373838376162 Jan 14 00:08:31.211000 audit: BPF prog-id=253 op=UNLOAD Jan 14 00:08:31.211000 audit[5499]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5482 pid=5499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437303261646463623861616663313161373137353264373838376162 Jan 14 00:08:31.211000 audit: BPF prog-id=252 op=UNLOAD Jan 14 00:08:31.211000 audit[5499]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5482 pid=5499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437303261646463623861616663313161373137353264373838376162 Jan 14 00:08:31.211000 audit: BPF prog-id=254 op=LOAD Jan 14 00:08:31.211000 audit[5499]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c648 a2=98 a3=0 items=0 ppid=5482 pid=5499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437303261646463623861616663313161373137353264373838376162 Jan 14 00:08:31.222000 audit: BPF prog-id=255 op=LOAD Jan 14 00:08:31.224000 audit: BPF prog-id=256 op=LOAD Jan 14 00:08:31.224000 audit[5517]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=5487 pid=5517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.224000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335663363613434333164316133613338643366626236303462346637 Jan 14 00:08:31.225000 audit: BPF prog-id=256 op=UNLOAD Jan 14 00:08:31.225000 audit[5517]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5487 pid=5517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335663363613434333164316133613338643366626236303462346637 Jan 14 00:08:31.225000 audit[5556]: NETFILTER_CFG table=filter:134 family=2 entries=56 op=nft_register_chain pid=5556 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:08:31.225000 audit[5556]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28728 a0=3 a1=ffffe3b80600 a2=0 a3=ffff90714fa8 items=0 ppid=5000 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.225000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:08:31.225000 audit: BPF prog-id=257 op=LOAD Jan 14 00:08:31.225000 audit[5517]: SYSCALL arch=c00000b7 syscall=280 success=yes 
exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=5487 pid=5517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335663363613434333164316133613338643366626236303462346637 Jan 14 00:08:31.226000 audit: BPF prog-id=258 op=LOAD Jan 14 00:08:31.226000 audit[5517]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=5487 pid=5517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.226000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335663363613434333164316133613338643366626236303462346637 Jan 14 00:08:31.227000 audit: BPF prog-id=258 op=UNLOAD Jan 14 00:08:31.227000 audit[5517]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5487 pid=5517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335663363613434333164316133613338643366626236303462346637 Jan 14 00:08:31.227000 audit: BPF prog-id=257 op=UNLOAD Jan 14 00:08:31.227000 audit[5517]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5487 pid=5517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335663363613434333164316133613338643366626236303462346637 Jan 14 00:08:31.227000 audit: BPF prog-id=259 op=LOAD Jan 14 00:08:31.227000 audit[5517]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=5487 pid=5517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335663363613434333164316133613338643366626236303462346637 Jan 14 00:08:31.254289 containerd[2043]: time="2026-01-14T00:08:31.254242556Z" level=info msg="connecting to shim a246c315f382429d354df2ebeb1e83940520374958d0db8c3899cfcbf3583dda" address="unix:///run/containerd/s/6473489a35456a2e0491e8db8bf0a8b01abd09ae05d3030acb7e983b127a6ea1" 
namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:08:31.285536 containerd[2043]: time="2026-01-14T00:08:31.285466966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9k9wp,Uid:f82b7507-d8a2-4f12-bcc9-0288422aaee6,Namespace:kube-system,Attempt:0,} returns sandbox id \"35f3ca4431d1a3a38d3fbb604b4f7b422f19e2728ec3f4bd4c5231dcdf8395aa\"" Jan 14 00:08:31.292050 systemd-networkd[1621]: calid4cf5bbd1c7: Link UP Jan 14 00:08:31.293680 systemd-networkd[1621]: calid4cf5bbd1c7: Gained carrier Jan 14 00:08:31.301172 containerd[2043]: time="2026-01-14T00:08:31.301128140Z" level=info msg="CreateContainer within sandbox \"35f3ca4431d1a3a38d3fbb604b4f7b422f19e2728ec3f4bd4c5231dcdf8395aa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 00:08:31.320719 containerd[2043]: 2026-01-14 00:08:30.885 [INFO][5400] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--79f767b88f--ntdwm-eth0 calico-apiserver-79f767b88f- calico-apiserver ce38d8d5-b119-4ec5-8427-02101a96fcd0 837 0 2026-01-14 00:07:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79f767b88f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.0.0-n-16ff4e9fd7 calico-apiserver-79f767b88f-ntdwm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid4cf5bbd1c7 [] [] }} ContainerID="db9ee059fc720a2a1f6b7b38e7fe64ee1bbe592bfc4f7904a3af8fa57dabbee7" Namespace="calico-apiserver" Pod="calico-apiserver-79f767b88f-ntdwm" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--79f767b88f--ntdwm-" Jan 14 00:08:31.320719 containerd[2043]: 2026-01-14 00:08:30.885 [INFO][5400] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="db9ee059fc720a2a1f6b7b38e7fe64ee1bbe592bfc4f7904a3af8fa57dabbee7" Namespace="calico-apiserver" Pod="calico-apiserver-79f767b88f-ntdwm" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--79f767b88f--ntdwm-eth0" Jan 14 00:08:31.320719 containerd[2043]: 2026-01-14 00:08:30.919 [INFO][5438] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db9ee059fc720a2a1f6b7b38e7fe64ee1bbe592bfc4f7904a3af8fa57dabbee7" HandleID="k8s-pod-network.db9ee059fc720a2a1f6b7b38e7fe64ee1bbe592bfc4f7904a3af8fa57dabbee7" Workload="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--79f767b88f--ntdwm-eth0" Jan 14 00:08:31.320719 containerd[2043]: 2026-01-14 00:08:30.919 [INFO][5438] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="db9ee059fc720a2a1f6b7b38e7fe64ee1bbe592bfc4f7904a3af8fa57dabbee7" HandleID="k8s-pod-network.db9ee059fc720a2a1f6b7b38e7fe64ee1bbe592bfc4f7904a3af8fa57dabbee7" Workload="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--79f767b88f--ntdwm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.0.0-n-16ff4e9fd7", "pod":"calico-apiserver-79f767b88f-ntdwm", "timestamp":"2026-01-14 00:08:30.919273634 +0000 UTC"}, Hostname:"ci-4547.0.0-n-16ff4e9fd7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:08:31.320719 containerd[2043]: 2026-01-14 00:08:30.919 
[INFO][5438] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 00:08:31.320719 containerd[2043]: 2026-01-14 00:08:31.136 [INFO][5438] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 00:08:31.320719 containerd[2043]: 2026-01-14 00:08:31.136 [INFO][5438] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-n-16ff4e9fd7' Jan 14 00:08:31.320719 containerd[2043]: 2026-01-14 00:08:31.192 [INFO][5438] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.db9ee059fc720a2a1f6b7b38e7fe64ee1bbe592bfc4f7904a3af8fa57dabbee7" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.320719 containerd[2043]: 2026-01-14 00:08:31.200 [INFO][5438] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.320719 containerd[2043]: 2026-01-14 00:08:31.220 [INFO][5438] ipam/ipam.go 511: Trying affinity for 192.168.44.0/26 host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.320719 containerd[2043]: 2026-01-14 00:08:31.225 [INFO][5438] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.0/26 host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.320719 containerd[2043]: 2026-01-14 00:08:31.230 [INFO][5438] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.0/26 host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.320719 containerd[2043]: 2026-01-14 00:08:31.230 [INFO][5438] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.44.0/26 handle="k8s-pod-network.db9ee059fc720a2a1f6b7b38e7fe64ee1bbe592bfc4f7904a3af8fa57dabbee7" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.320719 containerd[2043]: 2026-01-14 00:08:31.233 [INFO][5438] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.db9ee059fc720a2a1f6b7b38e7fe64ee1bbe592bfc4f7904a3af8fa57dabbee7 Jan 14 00:08:31.320719 containerd[2043]: 2026-01-14 00:08:31.242 [INFO][5438] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.44.0/26 handle="k8s-pod-network.db9ee059fc720a2a1f6b7b38e7fe64ee1bbe592bfc4f7904a3af8fa57dabbee7" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.320719 containerd[2043]: 2026-01-14 00:08:31.258 [INFO][5438] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.44.7/26] block=192.168.44.0/26 handle="k8s-pod-network.db9ee059fc720a2a1f6b7b38e7fe64ee1bbe592bfc4f7904a3af8fa57dabbee7" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.320719 containerd[2043]: 2026-01-14 00:08:31.259 [INFO][5438] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.7/26] handle="k8s-pod-network.db9ee059fc720a2a1f6b7b38e7fe64ee1bbe592bfc4f7904a3af8fa57dabbee7" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.320719 containerd[2043]: 2026-01-14 00:08:31.259 [INFO][5438] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 00:08:31.320719 containerd[2043]: 2026-01-14 00:08:31.259 [INFO][5438] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.44.7/26] IPv6=[] ContainerID="db9ee059fc720a2a1f6b7b38e7fe64ee1bbe592bfc4f7904a3af8fa57dabbee7" HandleID="k8s-pod-network.db9ee059fc720a2a1f6b7b38e7fe64ee1bbe592bfc4f7904a3af8fa57dabbee7" Workload="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--79f767b88f--ntdwm-eth0" Jan 14 00:08:31.321519 containerd[2043]: 2026-01-14 00:08:31.266 [INFO][5400] cni-plugin/k8s.go 418: Populated endpoint ContainerID="db9ee059fc720a2a1f6b7b38e7fe64ee1bbe592bfc4f7904a3af8fa57dabbee7" Namespace="calico-apiserver" Pod="calico-apiserver-79f767b88f-ntdwm" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--79f767b88f--ntdwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--79f767b88f--ntdwm-eth0", GenerateName:"calico-apiserver-79f767b88f-", Namespace:"calico-apiserver", SelfLink:"", UID:"ce38d8d5-b119-4ec5-8427-02101a96fcd0", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 7, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79f767b88f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-16ff4e9fd7", ContainerID:"", Pod:"calico-apiserver-79f767b88f-ntdwm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid4cf5bbd1c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:08:31.321519 containerd[2043]: 2026-01-14 00:08:31.266 [INFO][5400] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.7/32] ContainerID="db9ee059fc720a2a1f6b7b38e7fe64ee1bbe592bfc4f7904a3af8fa57dabbee7" Namespace="calico-apiserver" Pod="calico-apiserver-79f767b88f-ntdwm" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--79f767b88f--ntdwm-eth0" Jan 14 00:08:31.321519 containerd[2043]: 2026-01-14 00:08:31.267 [INFO][5400] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid4cf5bbd1c7 ContainerID="db9ee059fc720a2a1f6b7b38e7fe64ee1bbe592bfc4f7904a3af8fa57dabbee7" Namespace="calico-apiserver" Pod="calico-apiserver-79f767b88f-ntdwm" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--79f767b88f--ntdwm-eth0" Jan 14 00:08:31.321519 containerd[2043]: 2026-01-14 00:08:31.295 [INFO][5400] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db9ee059fc720a2a1f6b7b38e7fe64ee1bbe592bfc4f7904a3af8fa57dabbee7" Namespace="calico-apiserver" Pod="calico-apiserver-79f767b88f-ntdwm" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--79f767b88f--ntdwm-eth0" Jan 14 00:08:31.321519 containerd[2043]: 2026-01-14 00:08:31.295 [INFO][5400] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="db9ee059fc720a2a1f6b7b38e7fe64ee1bbe592bfc4f7904a3af8fa57dabbee7" Namespace="calico-apiserver" Pod="calico-apiserver-79f767b88f-ntdwm" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--79f767b88f--ntdwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--79f767b88f--ntdwm-eth0", GenerateName:"calico-apiserver-79f767b88f-", Namespace:"calico-apiserver", SelfLink:"", UID:"ce38d8d5-b119-4ec5-8427-02101a96fcd0", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 7, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79f767b88f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-16ff4e9fd7", ContainerID:"db9ee059fc720a2a1f6b7b38e7fe64ee1bbe592bfc4f7904a3af8fa57dabbee7", Pod:"calico-apiserver-79f767b88f-ntdwm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid4cf5bbd1c7", MAC:"16:e1:7b:b6:41:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:08:31.321519 containerd[2043]: 2026-01-14 00:08:31.315 [INFO][5400] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="db9ee059fc720a2a1f6b7b38e7fe64ee1bbe592bfc4f7904a3af8fa57dabbee7" Namespace="calico-apiserver" Pod="calico-apiserver-79f767b88f-ntdwm" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--79f767b88f--ntdwm-eth0" Jan 14 00:08:31.329904 systemd[1]: Started cri-containerd-a246c315f382429d354df2ebeb1e83940520374958d0db8c3899cfcbf3583dda.scope - libcontainer container a246c315f382429d354df2ebeb1e83940520374958d0db8c3899cfcbf3583dda. 
Jan 14 00:08:31.340183 containerd[2043]: time="2026-01-14T00:08:31.340133207Z" level=info msg="Container 8619041431955f4b5ff68bf988a092a07684f81e20e76ae04b6c3491d5d36936: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:08:31.343068 containerd[2043]: time="2026-01-14T00:08:31.342982909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df7698878-xcd87,Uid:703a17f2-f0e0-477c-b942-3e7b76e59fda,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d702addcb8aafc11a71752d7887abdf02bf178d7dd0dab903a67b82867c71c5b\"" Jan 14 00:08:31.343000 audit: BPF prog-id=260 op=LOAD Jan 14 00:08:31.344000 audit: BPF prog-id=261 op=LOAD Jan 14 00:08:31.344000 audit[5590]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=5566 pid=5590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132343663333135663338323432396433353464663265626562316538 Jan 14 00:08:31.344000 audit: BPF prog-id=261 op=UNLOAD Jan 14 00:08:31.344000 audit[5590]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5566 pid=5590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132343663333135663338323432396433353464663265626562316538 Jan 14 00:08:31.344000 audit: BPF prog-id=262 op=LOAD Jan 14 00:08:31.344000 audit[5590]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=5566 pid=5590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132343663333135663338323432396433353464663265626562316538 Jan 14 00:08:31.344000 audit: BPF prog-id=263 op=LOAD Jan 14 00:08:31.344000 audit[5590]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=5566 pid=5590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132343663333135663338323432396433353464663265626562316538 Jan 14 00:08:31.344000 audit: BPF prog-id=263 op=UNLOAD Jan 14 00:08:31.344000 audit[5590]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5566 pid=5590 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132343663333135663338323432396433353464663265626562316538 Jan 14 00:08:31.344000 audit: BPF prog-id=262 op=UNLOAD Jan 14 00:08:31.344000 audit[5590]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5566 pid=5590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132343663333135663338323432396433353464663265626562316538 Jan 14 00:08:31.344000 audit: BPF prog-id=264 op=LOAD Jan 14 00:08:31.344000 audit[5590]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=5566 pid=5590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132343663333135663338323432396433353464663265626562316538 Jan 14 00:08:31.346000 audit[5624]: NETFILTER_CFG table=filter:135 family=2 entries=53 op=nft_register_chain pid=5624 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:08:31.346000 audit[5624]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26624 a0=3 a1=ffffcff0d5d0 a2=0 a3=ffff96d65fa8 items=0 ppid=5000 pid=5624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.346000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:08:31.349498 containerd[2043]: time="2026-01-14T00:08:31.349358884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 00:08:31.379570 systemd-networkd[1621]: calif7dc8709255: Gained IPv6LL Jan 14 00:08:31.384578 containerd[2043]: time="2026-01-14T00:08:31.384456258Z" level=info msg="CreateContainer within sandbox \"35f3ca4431d1a3a38d3fbb604b4f7b422f19e2728ec3f4bd4c5231dcdf8395aa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8619041431955f4b5ff68bf988a092a07684f81e20e76ae04b6c3491d5d36936\"" Jan 14 00:08:31.386496 containerd[2043]: time="2026-01-14T00:08:31.386395092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-zrb5n,Uid:a80d0249-070a-486c-a74c-948bf824745a,Namespace:calico-system,Attempt:0,} returns sandbox id \"a246c315f382429d354df2ebeb1e83940520374958d0db8c3899cfcbf3583dda\"" Jan 14 00:08:31.387182 containerd[2043]: time="2026-01-14T00:08:31.387128866Z" level=info 
msg="StartContainer for \"8619041431955f4b5ff68bf988a092a07684f81e20e76ae04b6c3491d5d36936\"" Jan 14 00:08:31.388353 containerd[2043]: time="2026-01-14T00:08:31.388325542Z" level=info msg="connecting to shim 8619041431955f4b5ff68bf988a092a07684f81e20e76ae04b6c3491d5d36936" address="unix:///run/containerd/s/f75b12198943253f93bbf0d819832679e5e0a48d84e24ad027a6bee23c35b477" protocol=ttrpc version=3 Jan 14 00:08:31.409444 containerd[2043]: time="2026-01-14T00:08:31.409380607Z" level=info msg="connecting to shim db9ee059fc720a2a1f6b7b38e7fe64ee1bbe592bfc4f7904a3af8fa57dabbee7" address="unix:///run/containerd/s/b88f1e27811e7ac7532829bd9957844cc7d84283eb1900275d70786c4a838931" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:08:31.412200 systemd[1]: Started cri-containerd-8619041431955f4b5ff68bf988a092a07684f81e20e76ae04b6c3491d5d36936.scope - libcontainer container 8619041431955f4b5ff68bf988a092a07684f81e20e76ae04b6c3491d5d36936. Jan 14 00:08:31.421000 audit: BPF prog-id=265 op=LOAD Jan 14 00:08:31.422000 audit: BPF prog-id=266 op=LOAD Jan 14 00:08:31.422000 audit[5631]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=5487 pid=5631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836313930343134333139353566346235666636386266393838613039 Jan 14 00:08:31.422000 audit: BPF prog-id=266 op=UNLOAD Jan 14 00:08:31.422000 audit[5631]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5487 pid=5631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836313930343134333139353566346235666636386266393838613039 Jan 14 00:08:31.422000 audit: BPF prog-id=267 op=LOAD Jan 14 00:08:31.422000 audit[5631]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=5487 pid=5631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836313930343134333139353566346235666636386266393838613039 Jan 14 00:08:31.422000 audit: BPF prog-id=268 op=LOAD Jan 14 00:08:31.422000 audit[5631]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=5487 pid=5631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.422000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836313930343134333139353566346235666636386266393838613039 Jan 14 00:08:31.422000 audit: BPF prog-id=268 op=UNLOAD Jan 14 00:08:31.422000 audit[5631]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5487 pid=5631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836313930343134333139353566346235666636386266393838613039 Jan 14 00:08:31.422000 audit: BPF prog-id=267 op=UNLOAD Jan 14 00:08:31.422000 audit[5631]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5487 pid=5631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836313930343134333139353566346235666636386266393838613039 Jan 14 00:08:31.422000 audit: BPF prog-id=269 op=LOAD Jan 14 00:08:31.422000 audit[5631]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=5487 pid=5631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836313930343134333139353566346235666636386266393838613039 Jan 14 00:08:31.434261 systemd[1]: Started cri-containerd-db9ee059fc720a2a1f6b7b38e7fe64ee1bbe592bfc4f7904a3af8fa57dabbee7.scope - libcontainer container db9ee059fc720a2a1f6b7b38e7fe64ee1bbe592bfc4f7904a3af8fa57dabbee7. 
Jan 14 00:08:31.452601 containerd[2043]: time="2026-01-14T00:08:31.452437476Z" level=info msg="StartContainer for \"8619041431955f4b5ff68bf988a092a07684f81e20e76ae04b6c3491d5d36936\" returns successfully" Jan 14 00:08:31.455000 audit: BPF prog-id=270 op=LOAD Jan 14 00:08:31.455000 audit: BPF prog-id=271 op=LOAD Jan 14 00:08:31.455000 audit[5663]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186180 a2=98 a3=0 items=0 ppid=5650 pid=5663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462396565303539666337323061326131663662376233386537666536 Jan 14 00:08:31.456000 audit: BPF prog-id=271 op=UNLOAD Jan 14 00:08:31.456000 audit[5663]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5650 pid=5663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462396565303539666337323061326131663662376233386537666536 Jan 14 00:08:31.456000 audit: BPF prog-id=272 op=LOAD Jan 14 00:08:31.456000 audit[5663]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=5650 pid=5663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462396565303539666337323061326131663662376233386537666536 Jan 14 00:08:31.456000 audit: BPF prog-id=273 op=LOAD Jan 14 00:08:31.456000 audit[5663]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000186168 a2=98 a3=0 items=0 ppid=5650 pid=5663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462396565303539666337323061326131663662376233386537666536 Jan 14 00:08:31.456000 audit: BPF prog-id=273 op=UNLOAD Jan 14 00:08:31.456000 audit[5663]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5650 pid=5663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.456000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462396565303539666337323061326131663662376233386537666536 Jan 14 00:08:31.456000 audit: BPF prog-id=272 op=UNLOAD Jan 14 00:08:31.456000 audit[5663]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5650 pid=5663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462396565303539666337323061326131663662376233386537666536 Jan 14 00:08:31.456000 audit: BPF prog-id=274 op=LOAD Jan 14 00:08:31.456000 audit[5663]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186648 a2=98 a3=0 items=0 ppid=5650 pid=5663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462396565303539666337323061326131663662376233386537666536 Jan 14 00:08:31.489128 containerd[2043]: time="2026-01-14T00:08:31.489080040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f767b88f-ntdwm,Uid:ce38d8d5-b119-4ec5-8427-02101a96fcd0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"db9ee059fc720a2a1f6b7b38e7fe64ee1bbe592bfc4f7904a3af8fa57dabbee7\"" Jan 14 00:08:31.627970 containerd[2043]: time="2026-01-14T00:08:31.627915337Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:08:31.631496 containerd[2043]: time="2026-01-14T00:08:31.631453715Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 00:08:31.631618 containerd[2043]: time="2026-01-14T00:08:31.631542566Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 00:08:31.631828 kubelet[3576]: E0114 00:08:31.631789 3576 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:08:31.632609 kubelet[3576]: E0114 00:08:31.632245 3576 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:08:31.632609 kubelet[3576]: E0114 00:08:31.632406 3576 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-5df7698878-xcd87_calico-apiserver(703a17f2-f0e0-477c-b942-3e7b76e59fda): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 00:08:31.632609 kubelet[3576]: E0114 00:08:31.632444 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df7698878-xcd87" podUID="703a17f2-f0e0-477c-b942-3e7b76e59fda" Jan 14 00:08:31.633118 containerd[2043]: time="2026-01-14T00:08:31.632886118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 00:08:31.738888 containerd[2043]: time="2026-01-14T00:08:31.738846747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df7698878-qxjdm,Uid:9bd87eab-d892-4bfa-b953-a8d30659ec75,Namespace:calico-apiserver,Attempt:0,}" Jan 14 00:08:31.826270 systemd-networkd[1621]: cali3fe9d74e896: Gained IPv6LL Jan 14 00:08:31.845194 systemd-networkd[1621]: calif726a13ba75: Link UP Jan 14 00:08:31.845869 systemd-networkd[1621]: calif726a13ba75: Gained carrier Jan 14 00:08:31.865250 containerd[2043]: 2026-01-14 00:08:31.781 [INFO][5708] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--5df7698878--qxjdm-eth0 calico-apiserver-5df7698878- calico-apiserver 9bd87eab-d892-4bfa-b953-a8d30659ec75 833 0 2026-01-14 00:07:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5df7698878 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.0.0-n-16ff4e9fd7 calico-apiserver-5df7698878-qxjdm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif726a13ba75 [] [] }} ContainerID="9c871240981e59195bdaa38512c5fd579fc9a93771115fda395d4fa0f28de50c" Namespace="calico-apiserver" Pod="calico-apiserver-5df7698878-qxjdm" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--5df7698878--qxjdm-" Jan 14 00:08:31.865250 containerd[2043]: 2026-01-14 00:08:31.781 [INFO][5708] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9c871240981e59195bdaa38512c5fd579fc9a93771115fda395d4fa0f28de50c" Namespace="calico-apiserver" Pod="calico-apiserver-5df7698878-qxjdm" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--5df7698878--qxjdm-eth0" Jan 14 00:08:31.865250 containerd[2043]: 2026-01-14 00:08:31.800 [INFO][5719] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9c871240981e59195bdaa38512c5fd579fc9a93771115fda395d4fa0f28de50c" HandleID="k8s-pod-network.9c871240981e59195bdaa38512c5fd579fc9a93771115fda395d4fa0f28de50c" Workload="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--5df7698878--qxjdm-eth0" Jan 14 00:08:31.865250 containerd[2043]: 2026-01-14 00:08:31.800 [INFO][5719] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9c871240981e59195bdaa38512c5fd579fc9a93771115fda395d4fa0f28de50c" 
HandleID="k8s-pod-network.9c871240981e59195bdaa38512c5fd579fc9a93771115fda395d4fa0f28de50c" Workload="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--5df7698878--qxjdm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.0.0-n-16ff4e9fd7", "pod":"calico-apiserver-5df7698878-qxjdm", "timestamp":"2026-01-14 00:08:31.800212279 +0000 UTC"}, Hostname:"ci-4547.0.0-n-16ff4e9fd7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:08:31.865250 containerd[2043]: 2026-01-14 00:08:31.800 [INFO][5719] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 00:08:31.865250 containerd[2043]: 2026-01-14 00:08:31.800 [INFO][5719] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 00:08:31.865250 containerd[2043]: 2026-01-14 00:08:31.800 [INFO][5719] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-n-16ff4e9fd7' Jan 14 00:08:31.865250 containerd[2043]: 2026-01-14 00:08:31.806 [INFO][5719] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9c871240981e59195bdaa38512c5fd579fc9a93771115fda395d4fa0f28de50c" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.865250 containerd[2043]: 2026-01-14 00:08:31.810 [INFO][5719] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.865250 containerd[2043]: 2026-01-14 00:08:31.813 [INFO][5719] ipam/ipam.go 511: Trying affinity for 192.168.44.0/26 host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.865250 containerd[2043]: 2026-01-14 00:08:31.815 [INFO][5719] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.0/26 host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.865250 containerd[2043]: 2026-01-14 00:08:31.819 [INFO][5719] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.0/26 host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.865250 containerd[2043]: 2026-01-14 00:08:31.819 [INFO][5719] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.44.0/26 handle="k8s-pod-network.9c871240981e59195bdaa38512c5fd579fc9a93771115fda395d4fa0f28de50c" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.865250 containerd[2043]: 2026-01-14 00:08:31.821 [INFO][5719] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9c871240981e59195bdaa38512c5fd579fc9a93771115fda395d4fa0f28de50c Jan 14 00:08:31.865250 containerd[2043]: 2026-01-14 00:08:31.825 [INFO][5719] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.44.0/26 handle="k8s-pod-network.9c871240981e59195bdaa38512c5fd579fc9a93771115fda395d4fa0f28de50c" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.865250 containerd[2043]: 2026-01-14 00:08:31.837 [INFO][5719] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.44.8/26] block=192.168.44.0/26 handle="k8s-pod-network.9c871240981e59195bdaa38512c5fd579fc9a93771115fda395d4fa0f28de50c" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.865250 containerd[2043]: 2026-01-14 00:08:31.838 [INFO][5719] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.8/26] handle="k8s-pod-network.9c871240981e59195bdaa38512c5fd579fc9a93771115fda395d4fa0f28de50c" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:31.865250 containerd[2043]: 2026-01-14 00:08:31.838 [INFO][5719] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 00:08:31.865250 containerd[2043]: 2026-01-14 00:08:31.838 [INFO][5719] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.44.8/26] IPv6=[] ContainerID="9c871240981e59195bdaa38512c5fd579fc9a93771115fda395d4fa0f28de50c" HandleID="k8s-pod-network.9c871240981e59195bdaa38512c5fd579fc9a93771115fda395d4fa0f28de50c" Workload="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--5df7698878--qxjdm-eth0" Jan 14 00:08:31.865893 containerd[2043]: 2026-01-14 00:08:31.841 [INFO][5708] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9c871240981e59195bdaa38512c5fd579fc9a93771115fda395d4fa0f28de50c" Namespace="calico-apiserver" Pod="calico-apiserver-5df7698878-qxjdm" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--5df7698878--qxjdm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--5df7698878--qxjdm-eth0", GenerateName:"calico-apiserver-5df7698878-", Namespace:"calico-apiserver", SelfLink:"", UID:"9bd87eab-d892-4bfa-b953-a8d30659ec75", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 7, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df7698878", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-16ff4e9fd7", ContainerID:"", Pod:"calico-apiserver-5df7698878-qxjdm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif726a13ba75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:08:31.865893 containerd[2043]: 2026-01-14 00:08:31.841 [INFO][5708] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.8/32] ContainerID="9c871240981e59195bdaa38512c5fd579fc9a93771115fda395d4fa0f28de50c" Namespace="calico-apiserver" Pod="calico-apiserver-5df7698878-qxjdm" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--5df7698878--qxjdm-eth0" Jan 14 00:08:31.865893 containerd[2043]: 2026-01-14 00:08:31.841 [INFO][5708] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif726a13ba75 ContainerID="9c871240981e59195bdaa38512c5fd579fc9a93771115fda395d4fa0f28de50c" Namespace="calico-apiserver" Pod="calico-apiserver-5df7698878-qxjdm" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--5df7698878--qxjdm-eth0" Jan 14 00:08:31.865893 containerd[2043]: 2026-01-14 00:08:31.846 [INFO][5708] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9c871240981e59195bdaa38512c5fd579fc9a93771115fda395d4fa0f28de50c" Namespace="calico-apiserver" Pod="calico-apiserver-5df7698878-qxjdm" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--5df7698878--qxjdm-eth0" Jan 14 00:08:31.865893 containerd[2043]: 2026-01-14 00:08:31.846 [INFO][5708] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9c871240981e59195bdaa38512c5fd579fc9a93771115fda395d4fa0f28de50c" Namespace="calico-apiserver" Pod="calico-apiserver-5df7698878-qxjdm" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--5df7698878--qxjdm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--5df7698878--qxjdm-eth0", GenerateName:"calico-apiserver-5df7698878-", Namespace:"calico-apiserver", SelfLink:"", UID:"9bd87eab-d892-4bfa-b953-a8d30659ec75", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 7, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df7698878", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-16ff4e9fd7", ContainerID:"9c871240981e59195bdaa38512c5fd579fc9a93771115fda395d4fa0f28de50c", Pod:"calico-apiserver-5df7698878-qxjdm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif726a13ba75", MAC:"0a:ec:cc:14:9f:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:08:31.865893 containerd[2043]: 2026-01-14 00:08:31.860 [INFO][5708] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9c871240981e59195bdaa38512c5fd579fc9a93771115fda395d4fa0f28de50c" Namespace="calico-apiserver" Pod="calico-apiserver-5df7698878-qxjdm" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-calico--apiserver--5df7698878--qxjdm-eth0" Jan 14 00:08:31.876000 audit[5734]: NETFILTER_CFG table=filter:136 family=2 entries=63 op=nft_register_chain pid=5734 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:08:31.876000 audit[5734]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=30664 a0=3 a1=ffffd7522800 a2=0 a3=ffff9b3f6fa8 items=0 ppid=5000 pid=5734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.876000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:08:31.910709 containerd[2043]: time="2026-01-14T00:08:31.910570440Z" level=info msg="connecting to shim 9c871240981e59195bdaa38512c5fd579fc9a93771115fda395d4fa0f28de50c" address="unix:///run/containerd/s/b02358b46781bf983cc33295979ae310f781eba27c30bbf37f2aa516997c118e" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:08:31.912157 containerd[2043]: time="2026-01-14T00:08:31.912090718Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:08:31.916153 
containerd[2043]: time="2026-01-14T00:08:31.916060901Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 00:08:31.916153 containerd[2043]: time="2026-01-14T00:08:31.916105527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 00:08:31.916967 kubelet[3576]: E0114 00:08:31.916435 3576 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 00:08:31.916967 kubelet[3576]: E0114 00:08:31.916478 3576 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 00:08:31.916967 kubelet[3576]: E0114 00:08:31.916640 3576 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-zrb5n_calico-system(a80d0249-070a-486c-a74c-948bf824745a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 00:08:31.916967 kubelet[3576]: E0114 00:08:31.916671 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zrb5n" podUID="a80d0249-070a-486c-a74c-948bf824745a" Jan 14 00:08:31.918146 containerd[2043]: time="2026-01-14T00:08:31.917386189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 00:08:31.940200 kubelet[3576]: E0114 00:08:31.940067 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zrb5n" podUID="a80d0249-070a-486c-a74c-948bf824745a" Jan 14 00:08:31.940305 systemd[1]: Started cri-containerd-9c871240981e59195bdaa38512c5fd579fc9a93771115fda395d4fa0f28de50c.scope - libcontainer container 9c871240981e59195bdaa38512c5fd579fc9a93771115fda395d4fa0f28de50c. 
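Every PullImage attempt for the ghcr.io/flatcar/calico/*:v3.30.4 images in this log fails the same way: containerd reports a 404 Not Found from ghcr.io and kubelet surfaces it as ErrImagePull, then ImagePullBackOff. A rough Python sketch (illustrative only, not part of the logged system) that tallies which image references are failing when fed journal text like this excerpt:

    import collections
    import re
    import sys

    # Matches containerd error lines of the form seen above, where the image
    # reference appears inside escaped quotes:
    #   level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed"
    PATTERN = re.compile(r'PullImage \\"([^"\\]+)\\" failed')

    counts = collections.Counter()
    for line in sys.stdin:
        counts.update(PATTERN.findall(line))

    for image, n in counts.most_common():
        print(f"{n:4d}  {image}")

Fed this excerpt, it would report repeated failures for the apiserver, goldmane, csi, node-driver-registrar and whisker images, all at tag v3.30.4.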
Jan 14 00:08:31.946224 kubelet[3576]: E0114 00:08:31.946181 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df7698878-xcd87" podUID="703a17f2-f0e0-477c-b942-3e7b76e59fda" Jan 14 00:08:31.946327 kubelet[3576]: E0114 00:08:31.946257 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7757665449-ss5lr" podUID="0d1fcd1d-b4e7-438b-8969-12e4764b6063" Jan 14 00:08:31.957000 audit: BPF prog-id=275 op=LOAD Jan 14 00:08:31.958000 audit: BPF prog-id=276 op=LOAD Jan 14 00:08:31.958000 audit[5754]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5743 pid=5754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963383731323430393831653539313935626461613338353132633566 Jan 14 00:08:31.959000 audit: BPF prog-id=276 op=UNLOAD Jan 14 00:08:31.959000 audit[5754]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5743 pid=5754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963383731323430393831653539313935626461613338353132633566 Jan 14 00:08:31.959000 audit: BPF prog-id=277 op=LOAD Jan 14 00:08:31.959000 audit[5754]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5743 pid=5754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963383731323430393831653539313935626461613338353132633566 Jan 14 00:08:31.960000 audit: BPF prog-id=278 op=LOAD Jan 14 00:08:31.960000 audit[5754]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5743 pid=5754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963383731323430393831653539313935626461613338353132633566 Jan 14 00:08:31.960000 audit: BPF prog-id=278 op=UNLOAD Jan 14 00:08:31.960000 audit[5754]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5743 pid=5754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963383731323430393831653539313935626461613338353132633566 Jan 14 00:08:31.960000 audit: BPF prog-id=277 op=UNLOAD Jan 14 00:08:31.960000 audit[5754]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5743 pid=5754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963383731323430393831653539313935626461613338353132633566 Jan 14 00:08:31.960000 audit: BPF prog-id=279 op=LOAD Jan 14 00:08:31.960000 audit[5754]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5743 pid=5754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:31.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963383731323430393831653539313935626461613338353132633566 Jan 14 00:08:32.005166 containerd[2043]: time="2026-01-14T00:08:32.004911065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df7698878-qxjdm,Uid:9bd87eab-d892-4bfa-b953-a8d30659ec75,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9c871240981e59195bdaa38512c5fd579fc9a93771115fda395d4fa0f28de50c\"" Jan 14 00:08:32.022027 kubelet[3576]: I0114 00:08:32.021671 3576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9k9wp" podStartSLOduration=61.021655816 podStartE2EDuration="1m1.021655816s" podCreationTimestamp="2026-01-14 00:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:08:32.002660382 +0000 UTC m=+66.381062589" watchObservedRunningTime="2026-01-14 00:08:32.021655816 +0000 UTC m=+66.400057999" Jan 14 00:08:32.026000 audit[5781]: NETFILTER_CFG table=filter:137 family=2 entries=14 op=nft_register_rule pid=5781 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 
00:08:32.026000 audit[5781]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc9a344d0 a2=0 a3=1 items=0 ppid=3727 pid=5781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:32.026000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:08:32.039000 audit[5781]: NETFILTER_CFG table=nat:138 family=2 entries=56 op=nft_register_chain pid=5781 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:08:32.039000 audit[5781]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffc9a344d0 a2=0 a3=1 items=0 ppid=3727 pid=5781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:32.039000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:08:32.146187 systemd-networkd[1621]: cali3edb4930aef: Gained IPv6LL Jan 14 00:08:32.189013 containerd[2043]: time="2026-01-14T00:08:32.188863325Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:08:32.192526 containerd[2043]: time="2026-01-14T00:08:32.192490778Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 00:08:32.192643 containerd[2043]: time="2026-01-14T00:08:32.192566452Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 00:08:32.192762 kubelet[3576]: E0114 00:08:32.192723 3576 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:08:32.192817 kubelet[3576]: E0114 00:08:32.192771 3576 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:08:32.192966 kubelet[3576]: E0114 00:08:32.192935 3576 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-79f767b88f-ntdwm_calico-apiserver(ce38d8d5-b119-4ec5-8427-02101a96fcd0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 00:08:32.193029 kubelet[3576]: E0114 00:08:32.192964 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-79f767b88f-ntdwm" podUID="ce38d8d5-b119-4ec5-8427-02101a96fcd0" Jan 14 00:08:32.193224 containerd[2043]: time="2026-01-14T00:08:32.193167790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 00:08:32.430745 containerd[2043]: time="2026-01-14T00:08:32.430446843Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:08:32.433888 containerd[2043]: time="2026-01-14T00:08:32.433783567Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 00:08:32.433888 containerd[2043]: time="2026-01-14T00:08:32.433832984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 00:08:32.434054 kubelet[3576]: E0114 00:08:32.434011 3576 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:08:32.434100 kubelet[3576]: E0114 00:08:32.434059 3576 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:08:32.434142 kubelet[3576]: E0114 00:08:32.434124 3576 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5df7698878-qxjdm_calico-apiserver(9bd87eab-d892-4bfa-b953-a8d30659ec75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 00:08:32.434191 kubelet[3576]: E0114 00:08:32.434153 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df7698878-qxjdm" podUID="9bd87eab-d892-4bfa-b953-a8d30659ec75" Jan 14 00:08:32.722290 systemd-networkd[1621]: cali6d57579367c: Gained IPv6LL Jan 14 00:08:32.745773 containerd[2043]: time="2026-01-14T00:08:32.745712798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jrsmb,Uid:485f55bc-0719-47ec-b844-40d9b8f86f0d,Namespace:calico-system,Attempt:0,}" Jan 14 00:08:32.847987 systemd-networkd[1621]: cali53d4d5371c0: Link UP Jan 14 00:08:32.848111 systemd-networkd[1621]: cali53d4d5371c0: Gained carrier Jan 14 00:08:32.863829 containerd[2043]: 2026-01-14 00:08:32.786 [INFO][5784] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--n--16ff4e9fd7-k8s-csi--node--driver--jrsmb-eth0 csi-node-driver- calico-system 485f55bc-0719-47ec-b844-40d9b8f86f0d 716 0 2026-01-14 00:07:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 
k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4547.0.0-n-16ff4e9fd7 csi-node-driver-jrsmb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali53d4d5371c0 [] [] }} ContainerID="a6b9e8e9b98b7fa5371bdd9e878d99411325faeef3eefb0017afd66d214b6b70" Namespace="calico-system" Pod="csi-node-driver-jrsmb" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-csi--node--driver--jrsmb-" Jan 14 00:08:32.863829 containerd[2043]: 2026-01-14 00:08:32.786 [INFO][5784] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6b9e8e9b98b7fa5371bdd9e878d99411325faeef3eefb0017afd66d214b6b70" Namespace="calico-system" Pod="csi-node-driver-jrsmb" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-csi--node--driver--jrsmb-eth0" Jan 14 00:08:32.863829 containerd[2043]: 2026-01-14 00:08:32.803 [INFO][5796] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6b9e8e9b98b7fa5371bdd9e878d99411325faeef3eefb0017afd66d214b6b70" HandleID="k8s-pod-network.a6b9e8e9b98b7fa5371bdd9e878d99411325faeef3eefb0017afd66d214b6b70" Workload="ci--4547.0.0--n--16ff4e9fd7-k8s-csi--node--driver--jrsmb-eth0" Jan 14 00:08:32.863829 containerd[2043]: 2026-01-14 00:08:32.803 [INFO][5796] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a6b9e8e9b98b7fa5371bdd9e878d99411325faeef3eefb0017afd66d214b6b70" HandleID="k8s-pod-network.a6b9e8e9b98b7fa5371bdd9e878d99411325faeef3eefb0017afd66d214b6b70" Workload="ci--4547.0.0--n--16ff4e9fd7-k8s-csi--node--driver--jrsmb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b170), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-n-16ff4e9fd7", "pod":"csi-node-driver-jrsmb", "timestamp":"2026-01-14 00:08:32.803734644 +0000 UTC"}, Hostname:"ci-4547.0.0-n-16ff4e9fd7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:08:32.863829 containerd[2043]: 2026-01-14 00:08:32.803 [INFO][5796] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 00:08:32.863829 containerd[2043]: 2026-01-14 00:08:32.804 [INFO][5796] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 00:08:32.863829 containerd[2043]: 2026-01-14 00:08:32.804 [INFO][5796] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-n-16ff4e9fd7' Jan 14 00:08:32.863829 containerd[2043]: 2026-01-14 00:08:32.809 [INFO][5796] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a6b9e8e9b98b7fa5371bdd9e878d99411325faeef3eefb0017afd66d214b6b70" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:32.863829 containerd[2043]: 2026-01-14 00:08:32.813 [INFO][5796] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:32.863829 containerd[2043]: 2026-01-14 00:08:32.816 [INFO][5796] ipam/ipam.go 511: Trying affinity for 192.168.44.0/26 host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:32.863829 containerd[2043]: 2026-01-14 00:08:32.817 [INFO][5796] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.0/26 host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:32.863829 containerd[2043]: 2026-01-14 00:08:32.819 [INFO][5796] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.0/26 host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:32.863829 containerd[2043]: 2026-01-14 00:08:32.819 [INFO][5796] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.44.0/26 handle="k8s-pod-network.a6b9e8e9b98b7fa5371bdd9e878d99411325faeef3eefb0017afd66d214b6b70" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:32.863829 containerd[2043]: 2026-01-14 00:08:32.820 [INFO][5796] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a6b9e8e9b98b7fa5371bdd9e878d99411325faeef3eefb0017afd66d214b6b70 Jan 14 00:08:32.863829 containerd[2043]: 2026-01-14 00:08:32.827 [INFO][5796] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.44.0/26 handle="k8s-pod-network.a6b9e8e9b98b7fa5371bdd9e878d99411325faeef3eefb0017afd66d214b6b70" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:32.863829 containerd[2043]: 2026-01-14 00:08:32.838 [INFO][5796] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.44.9/26] block=192.168.44.0/26 handle="k8s-pod-network.a6b9e8e9b98b7fa5371bdd9e878d99411325faeef3eefb0017afd66d214b6b70" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:32.863829 containerd[2043]: 2026-01-14 00:08:32.839 [INFO][5796] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.9/26] handle="k8s-pod-network.a6b9e8e9b98b7fa5371bdd9e878d99411325faeef3eefb0017afd66d214b6b70" host="ci-4547.0.0-n-16ff4e9fd7" Jan 14 00:08:32.863829 containerd[2043]: 2026-01-14 00:08:32.839 [INFO][5796] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
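Both CNI adds above draw from the node's affinity block 192.168.44.0/26, which holds 64 addresses; the claimed pod IPs 192.168.44.8 and 192.168.44.9 sit inside it. A quick illustrative check with the Python standard library ipaddress module:

    import ipaddress

    block = ipaddress.ip_network("192.168.44.0/26")
    print(block.num_addresses)                            # 64 addresses in a /26 block
    print(ipaddress.ip_address("192.168.44.8") in block)  # True, calico-apiserver-5df7698878-qxjdm
    print(ipaddress.ip_address("192.168.44.9") in block)  # True, csi-node-driver-jrsmb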
Jan 14 00:08:32.863829 containerd[2043]: 2026-01-14 00:08:32.839 [INFO][5796] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.44.9/26] IPv6=[] ContainerID="a6b9e8e9b98b7fa5371bdd9e878d99411325faeef3eefb0017afd66d214b6b70" HandleID="k8s-pod-network.a6b9e8e9b98b7fa5371bdd9e878d99411325faeef3eefb0017afd66d214b6b70" Workload="ci--4547.0.0--n--16ff4e9fd7-k8s-csi--node--driver--jrsmb-eth0" Jan 14 00:08:32.864581 containerd[2043]: 2026-01-14 00:08:32.841 [INFO][5784] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6b9e8e9b98b7fa5371bdd9e878d99411325faeef3eefb0017afd66d214b6b70" Namespace="calico-system" Pod="csi-node-driver-jrsmb" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-csi--node--driver--jrsmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--16ff4e9fd7-k8s-csi--node--driver--jrsmb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"485f55bc-0719-47ec-b844-40d9b8f86f0d", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 7, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-16ff4e9fd7", ContainerID:"", Pod:"csi-node-driver-jrsmb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.44.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali53d4d5371c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:08:32.864581 containerd[2043]: 2026-01-14 00:08:32.841 [INFO][5784] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.9/32] ContainerID="a6b9e8e9b98b7fa5371bdd9e878d99411325faeef3eefb0017afd66d214b6b70" Namespace="calico-system" Pod="csi-node-driver-jrsmb" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-csi--node--driver--jrsmb-eth0" Jan 14 00:08:32.864581 containerd[2043]: 2026-01-14 00:08:32.841 [INFO][5784] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali53d4d5371c0 ContainerID="a6b9e8e9b98b7fa5371bdd9e878d99411325faeef3eefb0017afd66d214b6b70" Namespace="calico-system" Pod="csi-node-driver-jrsmb" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-csi--node--driver--jrsmb-eth0" Jan 14 00:08:32.864581 containerd[2043]: 2026-01-14 00:08:32.845 [INFO][5784] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6b9e8e9b98b7fa5371bdd9e878d99411325faeef3eefb0017afd66d214b6b70" Namespace="calico-system" Pod="csi-node-driver-jrsmb" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-csi--node--driver--jrsmb-eth0" Jan 14 00:08:32.864581 containerd[2043]: 2026-01-14 00:08:32.846 [INFO][5784] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a6b9e8e9b98b7fa5371bdd9e878d99411325faeef3eefb0017afd66d214b6b70" Namespace="calico-system" Pod="csi-node-driver-jrsmb" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-csi--node--driver--jrsmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--16ff4e9fd7-k8s-csi--node--driver--jrsmb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"485f55bc-0719-47ec-b844-40d9b8f86f0d", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 7, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-16ff4e9fd7", ContainerID:"a6b9e8e9b98b7fa5371bdd9e878d99411325faeef3eefb0017afd66d214b6b70", Pod:"csi-node-driver-jrsmb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.44.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali53d4d5371c0", MAC:"1a:52:e3:49:a8:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:08:32.864581 containerd[2043]: 2026-01-14 00:08:32.859 [INFO][5784] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a6b9e8e9b98b7fa5371bdd9e878d99411325faeef3eefb0017afd66d214b6b70" Namespace="calico-system" Pod="csi-node-driver-jrsmb" WorkloadEndpoint="ci--4547.0.0--n--16ff4e9fd7-k8s-csi--node--driver--jrsmb-eth0" Jan 14 00:08:32.874000 audit[5812]: NETFILTER_CFG table=filter:139 family=2 entries=56 op=nft_register_chain pid=5812 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:08:32.874000 audit[5812]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25484 a0=3 a1=ffffe83dc370 a2=0 a3=ffff97a35fa8 items=0 ppid=5000 pid=5812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:32.874000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:08:32.905039 containerd[2043]: time="2026-01-14T00:08:32.904960276Z" level=info msg="connecting to shim a6b9e8e9b98b7fa5371bdd9e878d99411325faeef3eefb0017afd66d214b6b70" address="unix:///run/containerd/s/4b4e6577449cb92d44dec8d409874bdcbefd63b5cf074701a2682045657a8a3b" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:08:32.935222 systemd[1]: Started cri-containerd-a6b9e8e9b98b7fa5371bdd9e878d99411325faeef3eefb0017afd66d214b6b70.scope - libcontainer container a6b9e8e9b98b7fa5371bdd9e878d99411325faeef3eefb0017afd66d214b6b70. 
Jan 14 00:08:32.944000 audit: BPF prog-id=280 op=LOAD Jan 14 00:08:32.945000 audit: BPF prog-id=281 op=LOAD Jan 14 00:08:32.945000 audit[5835]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5824 pid=5835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:32.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136623965386539623938623766613533373162646439653837386439 Jan 14 00:08:32.945000 audit: BPF prog-id=281 op=UNLOAD Jan 14 00:08:32.945000 audit[5835]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5824 pid=5835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:32.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136623965386539623938623766613533373162646439653837386439 Jan 14 00:08:32.945000 audit: BPF prog-id=282 op=LOAD Jan 14 00:08:32.945000 audit[5835]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5824 pid=5835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:32.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136623965386539623938623766613533373162646439653837386439 Jan 14 00:08:32.945000 audit: BPF prog-id=283 op=LOAD Jan 14 00:08:32.945000 audit[5835]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5824 pid=5835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:32.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136623965386539623938623766613533373162646439653837386439 Jan 14 00:08:32.945000 audit: BPF prog-id=283 op=UNLOAD Jan 14 00:08:32.945000 audit[5835]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5824 pid=5835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:32.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136623965386539623938623766613533373162646439653837386439 Jan 14 00:08:32.945000 audit: BPF prog-id=282 op=UNLOAD Jan 14 00:08:32.945000 audit[5835]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5824 pid=5835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:32.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136623965386539623938623766613533373162646439653837386439 Jan 14 00:08:32.945000 audit: BPF prog-id=284 op=LOAD Jan 14 00:08:32.945000 audit[5835]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5824 pid=5835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:32.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136623965386539623938623766613533373162646439653837386439 Jan 14 00:08:32.949630 kubelet[3576]: E0114 00:08:32.949568 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df7698878-qxjdm" podUID="9bd87eab-d892-4bfa-b953-a8d30659ec75" Jan 14 00:08:32.950738 kubelet[3576]: E0114 00:08:32.950660 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df7698878-xcd87" podUID="703a17f2-f0e0-477c-b942-3e7b76e59fda" Jan 14 00:08:32.953532 kubelet[3576]: E0114 00:08:32.953331 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zrb5n" podUID="a80d0249-070a-486c-a74c-948bf824745a" Jan 14 00:08:32.953532 kubelet[3576]: E0114 00:08:32.953402 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79f767b88f-ntdwm" 
podUID="ce38d8d5-b119-4ec5-8427-02101a96fcd0" Jan 14 00:08:32.972976 containerd[2043]: time="2026-01-14T00:08:32.972623988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jrsmb,Uid:485f55bc-0719-47ec-b844-40d9b8f86f0d,Namespace:calico-system,Attempt:0,} returns sandbox id \"a6b9e8e9b98b7fa5371bdd9e878d99411325faeef3eefb0017afd66d214b6b70\"" Jan 14 00:08:32.978830 containerd[2043]: time="2026-01-14T00:08:32.978232420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 00:08:32.979420 systemd-networkd[1621]: calic772677a86d: Gained IPv6LL Jan 14 00:08:32.979651 systemd-networkd[1621]: calid4cf5bbd1c7: Gained IPv6LL Jan 14 00:08:33.055000 audit[5862]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=5862 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:08:33.055000 audit[5862]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdc7a4d20 a2=0 a3=1 items=0 ppid=3727 pid=5862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:33.055000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:08:33.061000 audit[5862]: NETFILTER_CFG table=nat:141 family=2 entries=20 op=nft_register_rule pid=5862 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:08:33.061000 audit[5862]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffdc7a4d20 a2=0 a3=1 items=0 ppid=3727 pid=5862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:33.061000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:08:33.234424 containerd[2043]: time="2026-01-14T00:08:33.234290877Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:08:33.238592 containerd[2043]: time="2026-01-14T00:08:33.238538796Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 00:08:33.238694 containerd[2043]: time="2026-01-14T00:08:33.238650688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 00:08:33.238863 kubelet[3576]: E0114 00:08:33.238824 3576 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 00:08:33.238922 kubelet[3576]: E0114 00:08:33.238873 3576 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 00:08:33.238956 kubelet[3576]: E0114 00:08:33.238937 3576 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod 
csi-node-driver-jrsmb_calico-system(485f55bc-0719-47ec-b844-40d9b8f86f0d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 00:08:33.240209 containerd[2043]: time="2026-01-14T00:08:33.239954391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 00:08:33.473987 containerd[2043]: time="2026-01-14T00:08:33.473931663Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:08:33.477585 containerd[2043]: time="2026-01-14T00:08:33.477545869Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 00:08:33.477676 containerd[2043]: time="2026-01-14T00:08:33.477639912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 00:08:33.477897 kubelet[3576]: E0114 00:08:33.477859 3576 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 00:08:33.477965 kubelet[3576]: E0114 00:08:33.477909 3576 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 00:08:33.478037 kubelet[3576]: E0114 00:08:33.478017 3576 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-jrsmb_calico-system(485f55bc-0719-47ec-b844-40d9b8f86f0d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 00:08:33.478357 kubelet[3576]: E0114 00:08:33.478078 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jrsmb" podUID="485f55bc-0719-47ec-b844-40d9b8f86f0d" Jan 14 00:08:33.490120 systemd-networkd[1621]: calif726a13ba75: Gained IPv6LL Jan 14 00:08:33.959883 kubelet[3576]: E0114 00:08:33.959639 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df7698878-qxjdm" podUID="9bd87eab-d892-4bfa-b953-a8d30659ec75" Jan 14 00:08:33.960438 kubelet[3576]: E0114 00:08:33.960363 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jrsmb" podUID="485f55bc-0719-47ec-b844-40d9b8f86f0d" Jan 14 00:08:34.386299 systemd-networkd[1621]: cali53d4d5371c0: Gained IPv6LL Jan 14 00:08:34.734288 containerd[2043]: time="2026-01-14T00:08:34.734248406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 00:08:34.961485 kubelet[3576]: E0114 00:08:34.961439 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jrsmb" podUID="485f55bc-0719-47ec-b844-40d9b8f86f0d" Jan 14 00:08:35.013484 containerd[2043]: time="2026-01-14T00:08:35.013134834Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:08:35.017873 containerd[2043]: time="2026-01-14T00:08:35.017828926Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 00:08:35.018002 containerd[2043]: time="2026-01-14T00:08:35.017916753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 00:08:35.018138 kubelet[3576]: E0114 00:08:35.018101 3576 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 
00:08:35.018201 kubelet[3576]: E0114 00:08:35.018147 3576 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 00:08:35.018235 kubelet[3576]: E0114 00:08:35.018215 3576 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-79587cb578-7tgbl_calico-system(ca6326e0-1651-4ad3-9a4d-868c20d45353): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 00:08:35.018941 containerd[2043]: time="2026-01-14T00:08:35.018878387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 00:08:35.300191 containerd[2043]: time="2026-01-14T00:08:35.300058710Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:08:35.303398 containerd[2043]: time="2026-01-14T00:08:35.303313760Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 00:08:35.303398 containerd[2043]: time="2026-01-14T00:08:35.303345825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 00:08:35.303584 kubelet[3576]: E0114 00:08:35.303541 3576 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 00:08:35.303622 kubelet[3576]: E0114 00:08:35.303590 3576 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 00:08:35.303672 kubelet[3576]: E0114 00:08:35.303654 3576 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-79587cb578-7tgbl_calico-system(ca6326e0-1651-4ad3-9a4d-868c20d45353): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 00:08:35.303712 kubelet[3576]: E0114 00:08:35.303689 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79587cb578-7tgbl" podUID="ca6326e0-1651-4ad3-9a4d-868c20d45353" Jan 14 00:08:42.813247 kernel: kauditd_printk_skb: 252 callbacks suppressed Jan 14 00:08:42.813366 kernel: audit: type=1130 audit(1768349322.808:767): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.18:22-10.200.16.10:52072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:08:42.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.18:22-10.200.16.10:52072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:08:42.808820 systemd[1]: Started sshd@7-10.200.20.18:22-10.200.16.10:52072.service - OpenSSH per-connection server daemon (10.200.16.10:52072). Jan 14 00:08:43.256000 audit[5876]: USER_ACCT pid=5876 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:43.274679 sshd[5876]: Accepted publickey for core from 10.200.16.10 port 52072 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:08:43.275867 sshd-session[5876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:08:43.272000 audit[5876]: CRED_ACQ pid=5876 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:43.278086 kernel: audit: type=1101 audit(1768349323.256:768): pid=5876 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:43.297238 systemd-logind[1997]: New session 11 of user core. Jan 14 00:08:43.301999 kernel: audit: type=1103 audit(1768349323.272:769): pid=5876 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:43.302065 kernel: audit: type=1006 audit(1768349323.272:770): pid=5876 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 14 00:08:43.272000 audit[5876]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcea3ac40 a2=3 a3=0 items=0 ppid=1 pid=5876 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:43.304172 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 14 00:08:43.318555 kernel: audit: type=1300 audit(1768349323.272:770): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcea3ac40 a2=3 a3=0 items=0 ppid=1 pid=5876 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:43.272000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:08:43.325784 kernel: audit: type=1327 audit(1768349323.272:770): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:08:43.326000 audit[5876]: USER_START pid=5876 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:43.327000 audit[5882]: CRED_ACQ pid=5882 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:43.364130 kernel: audit: type=1105 audit(1768349323.326:771): pid=5876 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:43.364255 kernel: audit: type=1103 audit(1768349323.327:772): pid=5882 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:43.561117 sshd[5882]: Connection closed by 10.200.16.10 port 52072 Jan 14 00:08:43.561216 sshd-session[5876]: pam_unix(sshd:session): session closed for user core Jan 14 00:08:43.563000 audit[5876]: USER_END pid=5876 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:43.567356 systemd[1]: sshd@7-10.200.20.18:22-10.200.16.10:52072.service: Deactivated successfully. Jan 14 00:08:43.570828 systemd[1]: session-11.scope: Deactivated successfully. Jan 14 00:08:43.572242 systemd-logind[1997]: Session 11 logged out. Waiting for processes to exit. Jan 14 00:08:43.574169 systemd-logind[1997]: Removed session 11. 
Jan 14 00:08:43.563000 audit[5876]: CRED_DISP pid=5876 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:43.602016 kernel: audit: type=1106 audit(1768349323.563:773): pid=5876 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:43.602098 kernel: audit: type=1104 audit(1768349323.563:774): pid=5876 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:43.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.18:22-10.200.16.10:52072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:08:44.733525 containerd[2043]: time="2026-01-14T00:08:44.733481461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 00:08:45.014911 containerd[2043]: time="2026-01-14T00:08:45.014640049Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:08:45.018843 containerd[2043]: time="2026-01-14T00:08:45.018760606Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 00:08:45.018927 containerd[2043]: time="2026-01-14T00:08:45.018817280Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 00:08:45.019255 kubelet[3576]: E0114 00:08:45.019081 3576 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:08:45.019255 kubelet[3576]: E0114 00:08:45.019133 3576 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:08:45.019255 kubelet[3576]: E0114 00:08:45.019200 3576 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5df7698878-xcd87_calico-apiserver(703a17f2-f0e0-477c-b942-3e7b76e59fda): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 00:08:45.019255 kubelet[3576]: E0114 00:08:45.019225 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df7698878-xcd87" podUID="703a17f2-f0e0-477c-b942-3e7b76e59fda" Jan 14 00:08:45.736053 containerd[2043]: time="2026-01-14T00:08:45.735665130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 00:08:46.030441 containerd[2043]: time="2026-01-14T00:08:46.030168896Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:08:46.033695 containerd[2043]: time="2026-01-14T00:08:46.033605918Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 00:08:46.033695 containerd[2043]: time="2026-01-14T00:08:46.033663752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 00:08:46.034054 kubelet[3576]: E0114 00:08:46.033963 3576 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 00:08:46.034905 kubelet[3576]: E0114 00:08:46.034042 3576 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 00:08:46.034905 kubelet[3576]: E0114 00:08:46.034534 3576 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-jrsmb_calico-system(485f55bc-0719-47ec-b844-40d9b8f86f0d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 00:08:46.035380 containerd[2043]: time="2026-01-14T00:08:46.035327098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 00:08:46.293934 containerd[2043]: time="2026-01-14T00:08:46.293670526Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:08:46.296886 containerd[2043]: time="2026-01-14T00:08:46.296781249Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 00:08:46.296886 containerd[2043]: time="2026-01-14T00:08:46.296838019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 00:08:46.297091 kubelet[3576]: E0114 00:08:46.297046 3576 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 00:08:46.297146 kubelet[3576]: E0114 00:08:46.297110 3576 kuberuntime_image.go:43] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 00:08:46.297692 containerd[2043]: time="2026-01-14T00:08:46.297422639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 00:08:46.297803 kubelet[3576]: E0114 00:08:46.297569 3576 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7757665449-ss5lr_calico-system(0d1fcd1d-b4e7-438b-8969-12e4764b6063): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 00:08:46.297803 kubelet[3576]: E0114 00:08:46.297607 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7757665449-ss5lr" podUID="0d1fcd1d-b4e7-438b-8969-12e4764b6063" Jan 14 00:08:46.577193 containerd[2043]: time="2026-01-14T00:08:46.577062359Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:08:46.581003 containerd[2043]: time="2026-01-14T00:08:46.580915380Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 00:08:46.581227 containerd[2043]: time="2026-01-14T00:08:46.580962741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 00:08:46.581301 kubelet[3576]: E0114 00:08:46.581259 3576 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 00:08:46.581339 kubelet[3576]: E0114 00:08:46.581305 3576 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 00:08:46.581396 kubelet[3576]: E0114 00:08:46.581371 3576 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-jrsmb_calico-system(485f55bc-0719-47ec-b844-40d9b8f86f0d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 00:08:46.581437 kubelet[3576]: E0114 00:08:46.581410 3576 
pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jrsmb" podUID="485f55bc-0719-47ec-b844-40d9b8f86f0d" Jan 14 00:08:46.734229 containerd[2043]: time="2026-01-14T00:08:46.734117327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 00:08:47.022527 containerd[2043]: time="2026-01-14T00:08:47.022316197Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:08:47.025833 containerd[2043]: time="2026-01-14T00:08:47.025707313Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 00:08:47.025833 containerd[2043]: time="2026-01-14T00:08:47.025794500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 00:08:47.026145 kubelet[3576]: E0114 00:08:47.026098 3576 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:08:47.026198 kubelet[3576]: E0114 00:08:47.026148 3576 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:08:47.026324 kubelet[3576]: E0114 00:08:47.026298 3576 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-79f767b88f-ntdwm_calico-apiserver(ce38d8d5-b119-4ec5-8427-02101a96fcd0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 00:08:47.027056 kubelet[3576]: E0114 00:08:47.027011 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79f767b88f-ntdwm" podUID="ce38d8d5-b119-4ec5-8427-02101a96fcd0" Jan 14 00:08:47.027116 containerd[2043]: time="2026-01-14T00:08:47.027013590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 00:08:47.273805 containerd[2043]: time="2026-01-14T00:08:47.273539918Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 
00:08:47.277493 containerd[2043]: time="2026-01-14T00:08:47.277404162Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 00:08:47.277493 containerd[2043]: time="2026-01-14T00:08:47.277460180Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 00:08:47.277740 kubelet[3576]: E0114 00:08:47.277699 3576 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:08:47.278427 kubelet[3576]: E0114 00:08:47.277750 3576 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:08:47.278427 kubelet[3576]: E0114 00:08:47.277828 3576 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5df7698878-qxjdm_calico-apiserver(9bd87eab-d892-4bfa-b953-a8d30659ec75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 00:08:47.278427 kubelet[3576]: E0114 00:08:47.277854 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df7698878-qxjdm" podUID="9bd87eab-d892-4bfa-b953-a8d30659ec75" Jan 14 00:08:47.736253 containerd[2043]: time="2026-01-14T00:08:47.736165087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 00:08:48.010841 containerd[2043]: time="2026-01-14T00:08:48.010712177Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:08:48.017259 containerd[2043]: time="2026-01-14T00:08:48.017202496Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 00:08:48.017411 containerd[2043]: time="2026-01-14T00:08:48.017301523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 00:08:48.017699 kubelet[3576]: E0114 00:08:48.017584 3576 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 00:08:48.017699 kubelet[3576]: E0114 00:08:48.017650 3576 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 00:08:48.018013 kubelet[3576]: E0114 00:08:48.017963 3576 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-zrb5n_calico-system(a80d0249-070a-486c-a74c-948bf824745a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 00:08:48.018304 kubelet[3576]: E0114 00:08:48.018220 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zrb5n" podUID="a80d0249-070a-486c-a74c-948bf824745a" Jan 14 00:08:48.650227 systemd[1]: Started sshd@8-10.200.20.18:22-10.200.16.10:52082.service - OpenSSH per-connection server daemon (10.200.16.10:52082). Jan 14 00:08:48.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.18:22-10.200.16.10:52082 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:08:48.653945 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 00:08:48.654062 kernel: audit: type=1130 audit(1768349328.649:776): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.18:22-10.200.16.10:52082 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:08:49.088000 audit[5895]: USER_ACCT pid=5895 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:49.090102 sshd[5895]: Accepted publickey for core from 10.200.16.10 port 52082 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:08:49.107159 sshd-session[5895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:08:49.105000 audit[5895]: CRED_ACQ pid=5895 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:49.122682 kernel: audit: type=1101 audit(1768349329.088:777): pid=5895 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:49.122768 kernel: audit: type=1103 audit(1768349329.105:778): pid=5895 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:49.128520 systemd-logind[1997]: New session 12 of user core. Jan 14 00:08:49.132345 kernel: audit: type=1006 audit(1768349329.105:779): pid=5895 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 14 00:08:49.105000 audit[5895]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd9675030 a2=3 a3=0 items=0 ppid=1 pid=5895 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:49.150629 kernel: audit: type=1300 audit(1768349329.105:779): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd9675030 a2=3 a3=0 items=0 ppid=1 pid=5895 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:49.105000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:08:49.157966 kernel: audit: type=1327 audit(1768349329.105:779): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:08:49.159236 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 14 00:08:49.161000 audit[5895]: USER_START pid=5895 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:49.163000 audit[5899]: CRED_ACQ pid=5899 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:49.199062 kernel: audit: type=1105 audit(1768349329.161:780): pid=5895 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:49.199115 kernel: audit: type=1103 audit(1768349329.163:781): pid=5899 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:49.443015 sshd[5899]: Connection closed by 10.200.16.10 port 52082 Jan 14 00:08:49.443262 sshd-session[5895]: pam_unix(sshd:session): session closed for user core Jan 14 00:08:49.443000 audit[5895]: USER_END pid=5895 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:49.443000 audit[5895]: CRED_DISP pid=5895 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:49.467826 systemd[1]: sshd@8-10.200.20.18:22-10.200.16.10:52082.service: Deactivated successfully. Jan 14 00:08:49.471053 systemd[1]: session-12.scope: Deactivated successfully. Jan 14 00:08:49.472701 systemd-logind[1997]: Session 12 logged out. Waiting for processes to exit. Jan 14 00:08:49.475358 systemd-logind[1997]: Removed session 12. Jan 14 00:08:49.479602 kernel: audit: type=1106 audit(1768349329.443:782): pid=5895 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:49.479675 kernel: audit: type=1104 audit(1768349329.443:783): pid=5895 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:49.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.18:22-10.200.16.10:52082 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:08:50.734138 kubelet[3576]: E0114 00:08:50.734090 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79587cb578-7tgbl" podUID="ca6326e0-1651-4ad3-9a4d-868c20d45353" Jan 14 00:08:54.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.18:22-10.200.16.10:48006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:08:54.534101 systemd[1]: Started sshd@9-10.200.20.18:22-10.200.16.10:48006.service - OpenSSH per-connection server daemon (10.200.16.10:48006). Jan 14 00:08:54.537507 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 00:08:54.537583 kernel: audit: type=1130 audit(1768349334.533:785): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.18:22-10.200.16.10:48006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:08:54.977000 audit[5969]: USER_ACCT pid=5969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:54.978675 sshd[5969]: Accepted publickey for core from 10.200.16.10 port 48006 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:08:54.996038 kernel: audit: type=1101 audit(1768349334.977:786): pid=5969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:54.995000 audit[5969]: CRED_ACQ pid=5969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:54.997036 sshd-session[5969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:08:55.021648 kernel: audit: type=1103 audit(1768349334.995:787): pid=5969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:55.021764 kernel: audit: type=1006 audit(1768349334.995:788): pid=5969 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 14 00:08:54.995000 audit[5969]: SYSCALL arch=c00000b7 syscall=64 success=yes 
exit=3 a0=8 a1=fffff7ac6c00 a2=3 a3=0 items=0 ppid=1 pid=5969 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:55.039125 kernel: audit: type=1300 audit(1768349334.995:788): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff7ac6c00 a2=3 a3=0 items=0 ppid=1 pid=5969 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:54.995000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:08:55.044860 systemd-logind[1997]: New session 13 of user core. Jan 14 00:08:55.046488 kernel: audit: type=1327 audit(1768349334.995:788): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:08:55.050180 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 14 00:08:55.053000 audit[5969]: USER_START pid=5969 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:55.053000 audit[5973]: CRED_ACQ pid=5973 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:55.092846 kernel: audit: type=1105 audit(1768349335.053:789): pid=5969 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:55.092948 kernel: audit: type=1103 audit(1768349335.053:790): pid=5973 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:55.280032 sshd[5973]: Connection closed by 10.200.16.10 port 48006 Jan 14 00:08:55.280450 sshd-session[5969]: pam_unix(sshd:session): session closed for user core Jan 14 00:08:55.280000 audit[5969]: USER_END pid=5969 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:55.280000 audit[5969]: CRED_DISP pid=5969 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:55.303456 systemd[1]: sshd@9-10.200.20.18:22-10.200.16.10:48006.service: Deactivated successfully. Jan 14 00:08:55.306218 systemd[1]: session-13.scope: Deactivated successfully. 
Jan 14 00:08:55.317933 kernel: audit: type=1106 audit(1768349335.280:791): pid=5969 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:55.318087 kernel: audit: type=1104 audit(1768349335.280:792): pid=5969 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:55.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.18:22-10.200.16.10:48006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:08:55.319926 systemd-logind[1997]: Session 13 logged out. Waiting for processes to exit. Jan 14 00:08:55.322723 systemd-logind[1997]: Removed session 13. Jan 14 00:08:55.375379 systemd[1]: Started sshd@10-10.200.20.18:22-10.200.16.10:48022.service - OpenSSH per-connection server daemon (10.200.16.10:48022). Jan 14 00:08:55.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.18:22-10.200.16.10:48022 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:08:55.798000 audit[5985]: USER_ACCT pid=5985 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:55.799923 sshd[5985]: Accepted publickey for core from 10.200.16.10 port 48022 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:08:55.800000 audit[5985]: CRED_ACQ pid=5985 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:55.800000 audit[5985]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffeea6aad0 a2=3 a3=0 items=0 ppid=1 pid=5985 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:55.800000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:08:55.802068 sshd-session[5985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:08:55.806738 systemd-logind[1997]: New session 14 of user core. Jan 14 00:08:55.814210 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 14 00:08:55.817000 audit[5985]: USER_START pid=5985 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:55.819000 audit[5989]: CRED_ACQ pid=5989 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:56.107279 sshd[5989]: Connection closed by 10.200.16.10 port 48022 Jan 14 00:08:56.107951 sshd-session[5985]: pam_unix(sshd:session): session closed for user core Jan 14 00:08:56.109000 audit[5985]: USER_END pid=5985 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:56.109000 audit[5985]: CRED_DISP pid=5985 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:56.112551 systemd[1]: sshd@10-10.200.20.18:22-10.200.16.10:48022.service: Deactivated successfully. Jan 14 00:08:56.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.18:22-10.200.16.10:48022 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:08:56.115835 systemd[1]: session-14.scope: Deactivated successfully. Jan 14 00:08:56.117118 systemd-logind[1997]: Session 14 logged out. Waiting for processes to exit. Jan 14 00:08:56.119344 systemd-logind[1997]: Removed session 14. Jan 14 00:08:56.195423 systemd[1]: Started sshd@11-10.200.20.18:22-10.200.16.10:48024.service - OpenSSH per-connection server daemon (10.200.16.10:48024). Jan 14 00:08:56.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.18:22-10.200.16.10:48024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:08:56.625000 audit[5998]: USER_ACCT pid=5998 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:56.626963 sshd[5998]: Accepted publickey for core from 10.200.16.10 port 48024 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:08:56.626000 audit[5998]: CRED_ACQ pid=5998 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:56.626000 audit[5998]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff8a86100 a2=3 a3=0 items=0 ppid=1 pid=5998 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:08:56.626000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:08:56.628080 sshd-session[5998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:08:56.632216 systemd-logind[1997]: New session 15 of user core. Jan 14 00:08:56.642357 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 14 00:08:56.643000 audit[5998]: USER_START pid=5998 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:56.645000 audit[6002]: CRED_ACQ pid=6002 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:56.902689 sshd[6002]: Connection closed by 10.200.16.10 port 48024 Jan 14 00:08:56.902385 sshd-session[5998]: pam_unix(sshd:session): session closed for user core Jan 14 00:08:56.903000 audit[5998]: USER_END pid=5998 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:56.904000 audit[5998]: CRED_DISP pid=5998 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:08:56.907969 systemd-logind[1997]: Session 15 logged out. Waiting for processes to exit. Jan 14 00:08:56.908201 systemd[1]: sshd@11-10.200.20.18:22-10.200.16.10:48024.service: Deactivated successfully. Jan 14 00:08:56.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.18:22-10.200.16.10:48024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:08:56.909774 systemd[1]: session-15.scope: Deactivated successfully. 
Jan 14 00:08:56.911626 systemd-logind[1997]: Removed session 15. Jan 14 00:08:59.734979 kubelet[3576]: E0114 00:08:59.734696 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7757665449-ss5lr" podUID="0d1fcd1d-b4e7-438b-8969-12e4764b6063" Jan 14 00:08:59.734979 kubelet[3576]: E0114 00:08:59.734769 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df7698878-xcd87" podUID="703a17f2-f0e0-477c-b942-3e7b76e59fda" Jan 14 00:09:00.735457 kubelet[3576]: E0114 00:09:00.735305 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df7698878-qxjdm" podUID="9bd87eab-d892-4bfa-b953-a8d30659ec75" Jan 14 00:09:00.735457 kubelet[3576]: E0114 00:09:00.735410 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zrb5n" podUID="a80d0249-070a-486c-a74c-948bf824745a" Jan 14 00:09:00.738031 kubelet[3576]: E0114 00:09:00.736632 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jrsmb" podUID="485f55bc-0719-47ec-b844-40d9b8f86f0d" Jan 14 00:09:01.736807 kubelet[3576]: E0114 00:09:01.736756 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79f767b88f-ntdwm" podUID="ce38d8d5-b119-4ec5-8427-02101a96fcd0" Jan 14 00:09:01.738452 containerd[2043]: time="2026-01-14T00:09:01.738212417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 00:09:01.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.18:22-10.200.16.10:42640 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:09:01.991921 systemd[1]: Started sshd@12-10.200.20.18:22-10.200.16.10:42640.service - OpenSSH per-connection server daemon (10.200.16.10:42640). Jan 14 00:09:01.995187 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 14 00:09:01.995259 kernel: audit: type=1130 audit(1768349341.991:812): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.18:22-10.200.16.10:42640 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:09:02.018685 containerd[2043]: time="2026-01-14T00:09:02.018646575Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:09:02.022517 containerd[2043]: time="2026-01-14T00:09:02.022483013Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 00:09:02.022706 containerd[2043]: time="2026-01-14T00:09:02.022612577Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 00:09:02.025034 kubelet[3576]: E0114 00:09:02.024499 3576 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 00:09:02.025034 kubelet[3576]: E0114 00:09:02.024550 3576 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 00:09:02.025034 kubelet[3576]: E0114 00:09:02.024622 3576 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-79587cb578-7tgbl_calico-system(ca6326e0-1651-4ad3-9a4d-868c20d45353): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 00:09:02.026122 containerd[2043]: time="2026-01-14T00:09:02.026103530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 00:09:02.298256 containerd[2043]: time="2026-01-14T00:09:02.298132978Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:09:02.302148 containerd[2043]: 
time="2026-01-14T00:09:02.302110989Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 00:09:02.302495 containerd[2043]: time="2026-01-14T00:09:02.302264874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 00:09:02.302574 kubelet[3576]: E0114 00:09:02.302526 3576 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 00:09:02.302730 kubelet[3576]: E0114 00:09:02.302583 3576 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 00:09:02.302730 kubelet[3576]: E0114 00:09:02.302652 3576 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-79587cb578-7tgbl_calico-system(ca6326e0-1651-4ad3-9a4d-868c20d45353): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 00:09:02.302730 kubelet[3576]: E0114 00:09:02.302685 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79587cb578-7tgbl" podUID="ca6326e0-1651-4ad3-9a4d-868c20d45353" Jan 14 00:09:02.434000 audit[6030]: USER_ACCT pid=6030 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:02.452954 sshd[6030]: Accepted publickey for core from 10.200.16.10 port 42640 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:09:02.454830 sshd-session[6030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:09:02.453000 audit[6030]: CRED_ACQ pid=6030 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:02.470903 kernel: audit: type=1101 audit(1768349342.434:813): pid=6030 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:02.471552 kernel: audit: type=1103 audit(1768349342.453:814): pid=6030 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:02.480370 kernel: audit: type=1006 audit(1768349342.453:815): pid=6030 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 14 00:09:02.453000 audit[6030]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffca1b2dc0 a2=3 a3=0 items=0 ppid=1 pid=6030 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:09:02.483462 systemd-logind[1997]: New session 16 of user core. Jan 14 00:09:02.498902 kernel: audit: type=1300 audit(1768349342.453:815): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffca1b2dc0 a2=3 a3=0 items=0 ppid=1 pid=6030 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:09:02.453000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:09:02.501189 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 14 00:09:02.506266 kernel: audit: type=1327 audit(1768349342.453:815): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:09:02.506000 audit[6030]: USER_START pid=6030 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:02.528000 audit[6034]: CRED_ACQ pid=6034 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:02.544566 kernel: audit: type=1105 audit(1768349342.506:816): pid=6030 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:02.544667 kernel: audit: type=1103 audit(1768349342.528:817): pid=6034 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:02.736046 sshd[6034]: Connection closed by 10.200.16.10 port 42640 Jan 14 00:09:02.736598 sshd-session[6030]: pam_unix(sshd:session): session closed for user core Jan 14 00:09:02.737000 audit[6030]: USER_END pid=6030 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:02.761879 systemd[1]: sshd@12-10.200.20.18:22-10.200.16.10:42640.service: Deactivated successfully. Jan 14 00:09:02.737000 audit[6030]: CRED_DISP pid=6030 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:02.766062 systemd[1]: session-16.scope: Deactivated successfully. Jan 14 00:09:02.767883 systemd-logind[1997]: Session 16 logged out. Waiting for processes to exit. Jan 14 00:09:02.770613 systemd-logind[1997]: Removed session 16. Jan 14 00:09:02.778926 kernel: audit: type=1106 audit(1768349342.737:818): pid=6030 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:02.779017 kernel: audit: type=1104 audit(1768349342.737:819): pid=6030 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:02.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.18:22-10.200.16.10:42640 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:09:07.828818 systemd[1]: Started sshd@13-10.200.20.18:22-10.200.16.10:42646.service - OpenSSH per-connection server daemon (10.200.16.10:42646). Jan 14 00:09:07.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.18:22-10.200.16.10:42646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:09:07.832742 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 00:09:07.832804 kernel: audit: type=1130 audit(1768349347.827:821): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.18:22-10.200.16.10:42646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:09:08.270000 audit[6049]: USER_ACCT pid=6049 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:08.276094 sshd[6049]: Accepted publickey for core from 10.200.16.10 port 42646 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:09:08.289712 sshd-session[6049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:09:08.290512 kernel: audit: type=1101 audit(1768349348.270:822): pid=6049 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:08.290572 kernel: audit: type=1103 audit(1768349348.288:823): pid=6049 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:08.288000 audit[6049]: CRED_ACQ pid=6049 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:08.309098 kernel: audit: type=1006 audit(1768349348.288:824): pid=6049 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 14 00:09:08.312343 systemd-logind[1997]: New session 17 of user core. Jan 14 00:09:08.288000 audit[6049]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffea5162a0 a2=3 a3=0 items=0 ppid=1 pid=6049 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:09:08.338018 kernel: audit: type=1300 audit(1768349348.288:824): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffea5162a0 a2=3 a3=0 items=0 ppid=1 pid=6049 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:09:08.341211 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 14 00:09:08.288000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:09:08.350022 kernel: audit: type=1327 audit(1768349348.288:824): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:09:08.350000 audit[6049]: USER_START pid=6049 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:08.372787 kernel: audit: type=1105 audit(1768349348.350:825): pid=6049 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:08.352000 audit[6053]: CRED_ACQ pid=6053 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:08.390976 kernel: audit: type=1103 audit(1768349348.352:826): pid=6053 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:08.552101 sshd[6053]: Connection closed by 10.200.16.10 port 42646 Jan 14 00:09:08.552886 sshd-session[6049]: pam_unix(sshd:session): session closed for user core Jan 14 00:09:08.553000 audit[6049]: USER_END pid=6049 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:08.557632 systemd[1]: sshd@13-10.200.20.18:22-10.200.16.10:42646.service: Deactivated successfully. Jan 14 00:09:08.559814 systemd[1]: session-17.scope: Deactivated successfully. Jan 14 00:09:08.553000 audit[6049]: CRED_DISP pid=6049 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:08.576673 systemd-logind[1997]: Session 17 logged out. Waiting for processes to exit. 
Jan 14 00:09:08.591528 kernel: audit: type=1106 audit(1768349348.553:827): pid=6049 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:08.591611 kernel: audit: type=1104 audit(1768349348.553:828): pid=6049 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:08.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.18:22-10.200.16.10:42646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:09:08.593555 systemd-logind[1997]: Removed session 17. Jan 14 00:09:10.735414 containerd[2043]: time="2026-01-14T00:09:10.733972614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 00:09:10.984841 containerd[2043]: time="2026-01-14T00:09:10.984792665Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:09:10.988229 containerd[2043]: time="2026-01-14T00:09:10.988140582Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 00:09:10.988277 containerd[2043]: time="2026-01-14T00:09:10.988227353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 00:09:10.988703 kubelet[3576]: E0114 00:09:10.988657 3576 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 00:09:10.989192 kubelet[3576]: E0114 00:09:10.989042 3576 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 00:09:10.989192 kubelet[3576]: E0114 00:09:10.989129 3576 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7757665449-ss5lr_calico-system(0d1fcd1d-b4e7-438b-8969-12e4764b6063): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 00:09:10.989192 kubelet[3576]: E0114 00:09:10.989156 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7757665449-ss5lr" podUID="0d1fcd1d-b4e7-438b-8969-12e4764b6063" Jan 14 00:09:12.735216 kubelet[3576]: E0114 00:09:12.735166 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79587cb578-7tgbl" podUID="ca6326e0-1651-4ad3-9a4d-868c20d45353" Jan 14 00:09:12.736290 containerd[2043]: time="2026-01-14T00:09:12.735229496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 00:09:13.002214 containerd[2043]: time="2026-01-14T00:09:13.002087587Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:09:13.006250 containerd[2043]: time="2026-01-14T00:09:13.006205652Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 00:09:13.006362 containerd[2043]: time="2026-01-14T00:09:13.006292959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 00:09:13.006706 kubelet[3576]: E0114 00:09:13.006516 3576 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 00:09:13.006706 kubelet[3576]: E0114 00:09:13.006573 3576 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 00:09:13.006706 kubelet[3576]: E0114 00:09:13.006649 3576 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-zrb5n_calico-system(a80d0249-070a-486c-a74c-948bf824745a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 00:09:13.006706 kubelet[3576]: E0114 00:09:13.006676 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zrb5n" 
podUID="a80d0249-070a-486c-a74c-948bf824745a" Jan 14 00:09:13.635810 systemd[1]: Started sshd@14-10.200.20.18:22-10.200.16.10:56230.service - OpenSSH per-connection server daemon (10.200.16.10:56230). Jan 14 00:09:13.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.18:22-10.200.16.10:56230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:09:13.639671 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 00:09:13.639726 kernel: audit: type=1130 audit(1768349353.634:830): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.18:22-10.200.16.10:56230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:09:13.737258 containerd[2043]: time="2026-01-14T00:09:13.737053350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 00:09:14.011285 containerd[2043]: time="2026-01-14T00:09:14.011239971Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:09:14.014574 containerd[2043]: time="2026-01-14T00:09:14.014538574Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 00:09:14.014779 containerd[2043]: time="2026-01-14T00:09:14.014549703Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 00:09:14.014808 kubelet[3576]: E0114 00:09:14.014769 3576 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:09:14.015073 kubelet[3576]: E0114 00:09:14.014815 3576 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:09:14.015073 kubelet[3576]: E0114 00:09:14.014886 3576 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-79f767b88f-ntdwm_calico-apiserver(ce38d8d5-b119-4ec5-8427-02101a96fcd0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 00:09:14.015073 kubelet[3576]: E0114 00:09:14.014912 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79f767b88f-ntdwm" podUID="ce38d8d5-b119-4ec5-8427-02101a96fcd0" Jan 14 00:09:14.046000 audit[6067]: USER_ACCT pid=6067 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:14.066297 sshd[6067]: Accepted publickey for core from 10.200.16.10 port 56230 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:09:14.066000 audit[6067]: CRED_ACQ pid=6067 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:14.071321 sshd-session[6067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:09:14.085408 kernel: audit: type=1101 audit(1768349354.046:831): pid=6067 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:14.085517 kernel: audit: type=1103 audit(1768349354.066:832): pid=6067 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:14.090701 systemd-logind[1997]: New session 18 of user core. Jan 14 00:09:14.097956 kernel: audit: type=1006 audit(1768349354.066:833): pid=6067 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 14 00:09:14.066000 audit[6067]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe5328f10 a2=3 a3=0 items=0 ppid=1 pid=6067 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:09:14.100182 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 14 00:09:14.117439 kernel: audit: type=1300 audit(1768349354.066:833): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe5328f10 a2=3 a3=0 items=0 ppid=1 pid=6067 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:09:14.066000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:09:14.125584 kernel: audit: type=1327 audit(1768349354.066:833): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:09:14.118000 audit[6067]: USER_START pid=6067 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:14.146727 kernel: audit: type=1105 audit(1768349354.118:834): pid=6067 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:14.118000 audit[6071]: CRED_ACQ pid=6071 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:14.162527 kernel: audit: type=1103 audit(1768349354.118:835): pid=6071 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:14.326132 sshd[6071]: Connection closed by 10.200.16.10 port 56230 Jan 14 00:09:14.326804 sshd-session[6067]: pam_unix(sshd:session): session closed for user core Jan 14 00:09:14.327000 audit[6067]: USER_END pid=6067 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:14.331800 systemd[1]: sshd@14-10.200.20.18:22-10.200.16.10:56230.service: Deactivated successfully. Jan 14 00:09:14.335741 systemd[1]: session-18.scope: Deactivated successfully. Jan 14 00:09:14.337558 systemd-logind[1997]: Session 18 logged out. Waiting for processes to exit. Jan 14 00:09:14.339931 systemd-logind[1997]: Removed session 18. 
Jan 14 00:09:14.327000 audit[6067]: CRED_DISP pid=6067 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:14.365922 kernel: audit: type=1106 audit(1768349354.327:836): pid=6067 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:14.366046 kernel: audit: type=1104 audit(1768349354.327:837): pid=6067 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:14.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.18:22-10.200.16.10:56230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:09:14.736388 containerd[2043]: time="2026-01-14T00:09:14.735337201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 00:09:14.992096 containerd[2043]: time="2026-01-14T00:09:14.991921949Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:09:14.995152 containerd[2043]: time="2026-01-14T00:09:14.995087084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 00:09:14.995312 containerd[2043]: time="2026-01-14T00:09:14.995087340Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 00:09:14.995532 kubelet[3576]: E0114 00:09:14.995483 3576 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:09:14.995578 kubelet[3576]: E0114 00:09:14.995535 3576 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:09:14.995728 kubelet[3576]: E0114 00:09:14.995708 3576 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5df7698878-xcd87_calico-apiserver(703a17f2-f0e0-477c-b942-3e7b76e59fda): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 00:09:14.995757 kubelet[3576]: E0114 00:09:14.995740 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df7698878-xcd87" podUID="703a17f2-f0e0-477c-b942-3e7b76e59fda" Jan 14 00:09:14.996732 containerd[2043]: time="2026-01-14T00:09:14.996701292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 00:09:15.260585 containerd[2043]: time="2026-01-14T00:09:15.260467669Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:09:15.268377 containerd[2043]: time="2026-01-14T00:09:15.268332064Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 00:09:15.268461 containerd[2043]: time="2026-01-14T00:09:15.268426892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 00:09:15.268812 kubelet[3576]: E0114 00:09:15.268586 3576 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:09:15.268812 kubelet[3576]: E0114 00:09:15.268632 3576 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:09:15.268812 kubelet[3576]: E0114 00:09:15.268708 3576 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5df7698878-qxjdm_calico-apiserver(9bd87eab-d892-4bfa-b953-a8d30659ec75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 00:09:15.268812 kubelet[3576]: E0114 00:09:15.268734 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df7698878-qxjdm" podUID="9bd87eab-d892-4bfa-b953-a8d30659ec75" Jan 14 00:09:15.735942 containerd[2043]: time="2026-01-14T00:09:15.735720741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 00:09:15.960157 containerd[2043]: time="2026-01-14T00:09:15.960106018Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:09:15.964040 containerd[2043]: time="2026-01-14T00:09:15.963985322Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 00:09:15.964442 containerd[2043]: time="2026-01-14T00:09:15.964061772Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 00:09:15.964487 kubelet[3576]: E0114 00:09:15.964173 3576 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 00:09:15.964487 kubelet[3576]: E0114 00:09:15.964217 3576 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 00:09:15.964487 kubelet[3576]: E0114 00:09:15.964281 3576 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-jrsmb_calico-system(485f55bc-0719-47ec-b844-40d9b8f86f0d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 00:09:15.966563 containerd[2043]: time="2026-01-14T00:09:15.966517738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 00:09:16.226227 containerd[2043]: time="2026-01-14T00:09:16.226179203Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:09:16.231415 containerd[2043]: time="2026-01-14T00:09:16.231369632Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 00:09:16.231504 containerd[2043]: time="2026-01-14T00:09:16.231454883Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 00:09:16.231725 kubelet[3576]: E0114 00:09:16.231680 3576 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 00:09:16.231871 kubelet[3576]: E0114 00:09:16.231806 3576 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 00:09:16.231957 kubelet[3576]: E0114 00:09:16.231943 3576 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-jrsmb_calico-system(485f55bc-0719-47ec-b844-40d9b8f86f0d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 00:09:16.233187 kubelet[3576]: E0114 00:09:16.232394 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jrsmb" podUID="485f55bc-0719-47ec-b844-40d9b8f86f0d" Jan 14 00:09:19.414779 systemd[1]: Started sshd@15-10.200.20.18:22-10.200.16.10:56242.service - OpenSSH per-connection server daemon (10.200.16.10:56242). Jan 14 00:09:19.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.18:22-10.200.16.10:56242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:09:19.418497 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 00:09:19.418546 kernel: audit: type=1130 audit(1768349359.413:839): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.18:22-10.200.16.10:56242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:09:19.854000 audit[6084]: USER_ACCT pid=6084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:19.872704 sshd[6084]: Accepted publickey for core from 10.200.16.10 port 56242 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:09:19.874112 kernel: audit: type=1101 audit(1768349359.854:840): pid=6084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:19.873000 audit[6084]: CRED_ACQ pid=6084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:19.881917 sshd-session[6084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:09:19.900499 kernel: audit: type=1103 audit(1768349359.873:841): pid=6084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:19.900601 kernel: audit: type=1006 audit(1768349359.879:842): pid=6084 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 14 00:09:19.879000 audit[6084]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd251ae50 a2=3 a3=0 items=0 ppid=1 pid=6084 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:09:19.920717 kernel: audit: type=1300 audit(1768349359.879:842): arch=c00000b7 
syscall=64 success=yes exit=3 a0=8 a1=ffffd251ae50 a2=3 a3=0 items=0 ppid=1 pid=6084 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:09:19.879000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:09:19.928963 kernel: audit: type=1327 audit(1768349359.879:842): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:09:19.933074 systemd-logind[1997]: New session 19 of user core. Jan 14 00:09:19.936183 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 14 00:09:19.938000 audit[6084]: USER_START pid=6084 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:19.958000 audit[6088]: CRED_ACQ pid=6088 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:19.976351 kernel: audit: type=1105 audit(1768349359.938:843): pid=6084 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:19.976473 kernel: audit: type=1103 audit(1768349359.958:844): pid=6088 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:20.162106 sshd[6088]: Connection closed by 10.200.16.10 port 56242 Jan 14 00:09:20.163470 sshd-session[6084]: pam_unix(sshd:session): session closed for user core Jan 14 00:09:20.163000 audit[6084]: USER_END pid=6084 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:20.167695 systemd[1]: sshd@15-10.200.20.18:22-10.200.16.10:56242.service: Deactivated successfully. Jan 14 00:09:20.169749 systemd[1]: session-19.scope: Deactivated successfully. Jan 14 00:09:20.188058 systemd-logind[1997]: Session 19 logged out. Waiting for processes to exit. 
Jan 14 00:09:20.163000 audit[6084]: CRED_DISP pid=6084 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:20.203026 kernel: audit: type=1106 audit(1768349360.163:845): pid=6084 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:20.203142 kernel: audit: type=1104 audit(1768349360.163:846): pid=6084 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:20.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.18:22-10.200.16.10:56242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:09:20.204687 systemd-logind[1997]: Removed session 19. Jan 14 00:09:20.246500 systemd[1]: Started sshd@16-10.200.20.18:22-10.200.16.10:47962.service - OpenSSH per-connection server daemon (10.200.16.10:47962). Jan 14 00:09:20.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.18:22-10.200.16.10:47962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:09:20.637000 audit[6125]: USER_ACCT pid=6125 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:20.638855 sshd[6125]: Accepted publickey for core from 10.200.16.10 port 47962 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:09:20.638000 audit[6125]: CRED_ACQ pid=6125 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:20.639000 audit[6125]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffeec11b0 a2=3 a3=0 items=0 ppid=1 pid=6125 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:09:20.639000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:09:20.640621 sshd-session[6125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:09:20.644625 systemd-logind[1997]: New session 20 of user core. Jan 14 00:09:20.650368 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 14 00:09:20.651000 audit[6125]: USER_START pid=6125 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:20.653000 audit[6129]: CRED_ACQ pid=6129 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:21.012488 sshd[6129]: Connection closed by 10.200.16.10 port 47962 Jan 14 00:09:21.012246 sshd-session[6125]: pam_unix(sshd:session): session closed for user core Jan 14 00:09:21.013000 audit[6125]: USER_END pid=6125 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:21.013000 audit[6125]: CRED_DISP pid=6125 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:21.016724 systemd[1]: sshd@16-10.200.20.18:22-10.200.16.10:47962.service: Deactivated successfully. Jan 14 00:09:21.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.18:22-10.200.16.10:47962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:09:21.018688 systemd[1]: session-20.scope: Deactivated successfully. Jan 14 00:09:21.020165 systemd-logind[1997]: Session 20 logged out. Waiting for processes to exit. Jan 14 00:09:21.022059 systemd-logind[1997]: Removed session 20. Jan 14 00:09:21.105201 systemd[1]: Started sshd@17-10.200.20.18:22-10.200.16.10:47968.service - OpenSSH per-connection server daemon (10.200.16.10:47968). Jan 14 00:09:21.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.18:22-10.200.16.10:47968 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:09:21.527000 audit[6139]: USER_ACCT pid=6139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:21.529214 sshd[6139]: Accepted publickey for core from 10.200.16.10 port 47968 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:09:21.528000 audit[6139]: CRED_ACQ pid=6139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:21.528000 audit[6139]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd2bc2d40 a2=3 a3=0 items=0 ppid=1 pid=6139 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:09:21.528000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:09:21.530524 sshd-session[6139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:09:21.534636 systemd-logind[1997]: New session 21 of user core. Jan 14 00:09:21.542169 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 14 00:09:21.544000 audit[6139]: USER_START pid=6139 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:21.547000 audit[6143]: CRED_ACQ pid=6143 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:21.737835 kubelet[3576]: E0114 00:09:21.737560 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7757665449-ss5lr" podUID="0d1fcd1d-b4e7-438b-8969-12e4764b6063" Jan 14 00:09:22.039000 audit[6159]: NETFILTER_CFG table=filter:142 family=2 entries=26 op=nft_register_rule pid=6159 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:09:22.039000 audit[6159]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffe6fa4640 a2=0 a3=1 items=0 ppid=3727 pid=6159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:09:22.039000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:09:22.043000 audit[6159]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=6159 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:09:22.043000 audit[6159]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffe6fa4640 a2=0 a3=1 items=0 ppid=3727 pid=6159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:09:22.043000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:09:22.125918 sshd[6143]: Connection closed by 10.200.16.10 port 47968 Jan 14 00:09:22.126690 sshd-session[6139]: pam_unix(sshd:session): session closed for user core Jan 14 00:09:22.128000 audit[6139]: USER_END pid=6139 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:22.128000 audit[6139]: CRED_DISP pid=6139 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:22.133453 systemd[1]: sshd@17-10.200.20.18:22-10.200.16.10:47968.service: Deactivated successfully. Jan 14 00:09:22.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.18:22-10.200.16.10:47968 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:09:22.136683 systemd[1]: session-21.scope: Deactivated successfully. Jan 14 00:09:22.139260 systemd-logind[1997]: Session 21 logged out. Waiting for processes to exit. Jan 14 00:09:22.140148 systemd-logind[1997]: Removed session 21. Jan 14 00:09:22.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.18:22-10.200.16.10:47982 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:09:22.215080 systemd[1]: Started sshd@18-10.200.20.18:22-10.200.16.10:47982.service - OpenSSH per-connection server daemon (10.200.16.10:47982). 
Jan 14 00:09:22.634000 audit[6164]: USER_ACCT pid=6164 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:22.635676 sshd[6164]: Accepted publickey for core from 10.200.16.10 port 47982 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:09:22.635000 audit[6164]: CRED_ACQ pid=6164 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:22.635000 audit[6164]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc4fc1250 a2=3 a3=0 items=0 ppid=1 pid=6164 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:09:22.635000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:09:22.637065 sshd-session[6164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:09:22.640772 systemd-logind[1997]: New session 22 of user core. Jan 14 00:09:22.647149 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 14 00:09:22.649000 audit[6164]: USER_START pid=6164 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:22.651000 audit[6168]: CRED_ACQ pid=6168 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:23.054322 sshd[6168]: Connection closed by 10.200.16.10 port 47982 Jan 14 00:09:23.056184 sshd-session[6164]: pam_unix(sshd:session): session closed for user core Jan 14 00:09:23.056000 audit[6164]: USER_END pid=6164 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:23.056000 audit[6175]: NETFILTER_CFG table=filter:144 family=2 entries=38 op=nft_register_rule pid=6175 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:09:23.056000 audit[6175]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=fffffce85830 a2=0 a3=1 items=0 ppid=3727 pid=6175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:09:23.056000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:09:23.057000 audit[6164]: CRED_DISP pid=6164 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:23.060271 systemd-logind[1997]: Session 22 logged out. Waiting for processes to exit. Jan 14 00:09:23.060000 audit[6175]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=6175 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:09:23.060000 audit[6175]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffffce85830 a2=0 a3=1 items=0 ppid=3727 pid=6175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:09:23.060000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:09:23.062324 systemd[1]: sshd@18-10.200.20.18:22-10.200.16.10:47982.service: Deactivated successfully. Jan 14 00:09:23.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.18:22-10.200.16.10:47982 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:09:23.065456 systemd[1]: session-22.scope: Deactivated successfully. Jan 14 00:09:23.067324 systemd-logind[1997]: Removed session 22. Jan 14 00:09:23.147385 systemd[1]: Started sshd@19-10.200.20.18:22-10.200.16.10:47990.service - OpenSSH per-connection server daemon (10.200.16.10:47990). Jan 14 00:09:23.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.18:22-10.200.16.10:47990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:09:23.566000 audit[6180]: USER_ACCT pid=6180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:23.568108 sshd[6180]: Accepted publickey for core from 10.200.16.10 port 47990 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:09:23.568000 audit[6180]: CRED_ACQ pid=6180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:23.568000 audit[6180]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd7354dd0 a2=3 a3=0 items=0 ppid=1 pid=6180 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:09:23.568000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:09:23.569844 sshd-session[6180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:09:23.573722 systemd-logind[1997]: New session 23 of user core. Jan 14 00:09:23.581159 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 14 00:09:23.582000 audit[6180]: USER_START pid=6180 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:23.584000 audit[6184]: CRED_ACQ pid=6184 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:23.843726 sshd[6184]: Connection closed by 10.200.16.10 port 47990 Jan 14 00:09:23.843554 sshd-session[6180]: pam_unix(sshd:session): session closed for user core Jan 14 00:09:23.844000 audit[6180]: USER_END pid=6180 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:23.845000 audit[6180]: CRED_DISP pid=6180 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:23.847862 systemd[1]: sshd@19-10.200.20.18:22-10.200.16.10:47990.service: Deactivated successfully. Jan 14 00:09:23.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.18:22-10.200.16.10:47990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:09:23.849644 systemd[1]: session-23.scope: Deactivated successfully. Jan 14 00:09:23.851193 systemd-logind[1997]: Session 23 logged out. Waiting for processes to exit. Jan 14 00:09:23.852855 systemd-logind[1997]: Removed session 23. 
Jan 14 00:09:24.736853 kubelet[3576]: E0114 00:09:24.735920 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79587cb578-7tgbl" podUID="ca6326e0-1651-4ad3-9a4d-868c20d45353" Jan 14 00:09:25.735279 kubelet[3576]: E0114 00:09:25.735067 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df7698878-xcd87" podUID="703a17f2-f0e0-477c-b942-3e7b76e59fda" Jan 14 00:09:27.409000 audit[6198]: NETFILTER_CFG table=filter:146 family=2 entries=26 op=nft_register_rule pid=6198 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:09:27.413745 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 14 00:09:27.413809 kernel: audit: type=1325 audit(1768349367.409:888): table=filter:146 family=2 entries=26 op=nft_register_rule pid=6198 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:09:27.409000 audit[6198]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff9857ab0 a2=0 a3=1 items=0 ppid=3727 pid=6198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:09:27.443474 kernel: audit: type=1300 audit(1768349367.409:888): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff9857ab0 a2=0 a3=1 items=0 ppid=3727 pid=6198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:09:27.409000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:09:27.453183 kernel: audit: type=1327 audit(1768349367.409:888): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:09:27.423000 audit[6198]: NETFILTER_CFG table=nat:147 family=2 entries=104 op=nft_register_chain pid=6198 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:09:27.464174 kernel: audit: type=1325 audit(1768349367.423:889): table=nat:147 family=2 entries=104 op=nft_register_chain pid=6198 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:09:27.423000 audit[6198]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=fffff9857ab0 a2=0 a3=1 items=0 ppid=3727 pid=6198 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:09:27.423000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:09:27.493985 kernel: audit: type=1300 audit(1768349367.423:889): arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=fffff9857ab0 a2=0 a3=1 items=0 ppid=3727 pid=6198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:09:27.494110 kernel: audit: type=1327 audit(1768349367.423:889): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:09:27.735631 kubelet[3576]: E0114 00:09:27.734926 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zrb5n" podUID="a80d0249-070a-486c-a74c-948bf824745a" Jan 14 00:09:28.734424 kubelet[3576]: E0114 00:09:28.734383 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79f767b88f-ntdwm" podUID="ce38d8d5-b119-4ec5-8427-02101a96fcd0" Jan 14 00:09:28.932686 systemd[1]: Started sshd@20-10.200.20.18:22-10.200.16.10:47996.service - OpenSSH per-connection server daemon (10.200.16.10:47996). Jan 14 00:09:28.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.18:22-10.200.16.10:47996 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:09:28.949021 kernel: audit: type=1130 audit(1768349368.932:890): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.18:22-10.200.16.10:47996 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:09:29.371000 audit[6200]: USER_ACCT pid=6200 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:29.388570 sshd[6200]: Accepted publickey for core from 10.200.16.10 port 47996 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:09:29.391081 kernel: audit: type=1101 audit(1768349369.371:891): pid=6200 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:29.390000 audit[6200]: CRED_ACQ pid=6200 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:29.395810 sshd-session[6200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:09:29.418255 kernel: audit: type=1103 audit(1768349369.390:892): pid=6200 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:29.418346 kernel: audit: type=1006 audit(1768349369.390:893): pid=6200 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 14 00:09:29.390000 audit[6200]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc54a96d0 a2=3 a3=0 items=0 ppid=1 pid=6200 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:09:29.390000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:09:29.421611 systemd-logind[1997]: New session 24 of user core. Jan 14 00:09:29.429133 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 14 00:09:29.431000 audit[6200]: USER_START pid=6200 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:29.432000 audit[6204]: CRED_ACQ pid=6204 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:29.647333 sshd[6204]: Connection closed by 10.200.16.10 port 47996 Jan 14 00:09:29.648144 sshd-session[6200]: pam_unix(sshd:session): session closed for user core Jan 14 00:09:29.648000 audit[6200]: USER_END pid=6200 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:29.648000 audit[6200]: CRED_DISP pid=6200 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:29.652544 systemd[1]: sshd@20-10.200.20.18:22-10.200.16.10:47996.service: Deactivated successfully. Jan 14 00:09:29.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.18:22-10.200.16.10:47996 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:09:29.654547 systemd[1]: session-24.scope: Deactivated successfully. Jan 14 00:09:29.656069 systemd-logind[1997]: Session 24 logged out. Waiting for processes to exit. Jan 14 00:09:29.656865 systemd-logind[1997]: Removed session 24. 
Jan 14 00:09:30.733692 kubelet[3576]: E0114 00:09:30.733577 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df7698878-qxjdm" podUID="9bd87eab-d892-4bfa-b953-a8d30659ec75" Jan 14 00:09:31.735738 kubelet[3576]: E0114 00:09:31.735577 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jrsmb" podUID="485f55bc-0719-47ec-b844-40d9b8f86f0d" Jan 14 00:09:34.752590 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 14 00:09:34.752752 kernel: audit: type=1130 audit(1768349374.733:899): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.18:22-10.200.16.10:39504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:09:34.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.18:22-10.200.16.10:39504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:09:34.734241 systemd[1]: Started sshd@21-10.200.20.18:22-10.200.16.10:39504.service - OpenSSH per-connection server daemon (10.200.16.10:39504). 
Jan 14 00:09:35.175000 audit[6221]: USER_ACCT pid=6221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:35.176482 sshd[6221]: Accepted publickey for core from 10.200.16.10 port 39504 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:09:35.193242 sshd-session[6221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:09:35.191000 audit[6221]: CRED_ACQ pid=6221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:35.209281 kernel: audit: type=1101 audit(1768349375.175:900): pid=6221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:35.209342 kernel: audit: type=1103 audit(1768349375.191:901): pid=6221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:35.218964 kernel: audit: type=1006 audit(1768349375.191:902): pid=6221 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 14 00:09:35.191000 audit[6221]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe8de3bd0 a2=3 a3=0 items=0 ppid=1 pid=6221 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:09:35.236158 kernel: audit: type=1300 audit(1768349375.191:902): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe8de3bd0 a2=3 a3=0 items=0 ppid=1 pid=6221 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:09:35.191000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:09:35.243358 kernel: audit: type=1327 audit(1768349375.191:902): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:09:35.243055 systemd-logind[1997]: New session 25 of user core. Jan 14 00:09:35.246156 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 14 00:09:35.248000 audit[6221]: USER_START pid=6221 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:35.268000 audit[6225]: CRED_ACQ pid=6225 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:35.282677 kernel: audit: type=1105 audit(1768349375.248:903): pid=6221 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:35.282762 kernel: audit: type=1103 audit(1768349375.268:904): pid=6225 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:35.492865 sshd[6225]: Connection closed by 10.200.16.10 port 39504 Jan 14 00:09:35.493419 sshd-session[6221]: pam_unix(sshd:session): session closed for user core Jan 14 00:09:35.493000 audit[6221]: USER_END pid=6221 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:35.497278 systemd-logind[1997]: Session 25 logged out. Waiting for processes to exit. Jan 14 00:09:35.499038 systemd[1]: sshd@21-10.200.20.18:22-10.200.16.10:39504.service: Deactivated successfully. Jan 14 00:09:35.502286 systemd[1]: session-25.scope: Deactivated successfully. Jan 14 00:09:35.504084 systemd-logind[1997]: Removed session 25. Jan 14 00:09:35.493000 audit[6221]: CRED_DISP pid=6221 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:35.529791 kernel: audit: type=1106 audit(1768349375.493:905): pid=6221 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:35.529874 kernel: audit: type=1104 audit(1768349375.493:906): pid=6221 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:35.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.18:22-10.200.16.10:39504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:09:35.734935 kubelet[3576]: E0114 00:09:35.734657 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7757665449-ss5lr" podUID="0d1fcd1d-b4e7-438b-8969-12e4764b6063" Jan 14 00:09:38.734236 kubelet[3576]: E0114 00:09:38.734166 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79587cb578-7tgbl" podUID="ca6326e0-1651-4ad3-9a4d-868c20d45353" Jan 14 00:09:39.736507 kubelet[3576]: E0114 00:09:39.736281 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79f767b88f-ntdwm" podUID="ce38d8d5-b119-4ec5-8427-02101a96fcd0" Jan 14 00:09:40.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.18:22-10.200.16.10:46206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:09:40.585633 systemd[1]: Started sshd@22-10.200.20.18:22-10.200.16.10:46206.service - OpenSSH per-connection server daemon (10.200.16.10:46206). Jan 14 00:09:40.589133 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 00:09:40.591148 kernel: audit: type=1130 audit(1768349380.584:908): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.18:22-10.200.16.10:46206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:09:40.733864 kubelet[3576]: E0114 00:09:40.733823 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df7698878-xcd87" podUID="703a17f2-f0e0-477c-b942-3e7b76e59fda" Jan 14 00:09:41.024000 audit[6238]: USER_ACCT pid=6238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:41.041171 sshd[6238]: Accepted publickey for core from 10.200.16.10 port 46206 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:09:41.041000 audit[6238]: CRED_ACQ pid=6238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:41.045368 sshd-session[6238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:09:41.050231 systemd-logind[1997]: New session 26 of user core. Jan 14 00:09:41.060079 kernel: audit: type=1101 audit(1768349381.024:909): pid=6238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:41.060150 kernel: audit: type=1103 audit(1768349381.041:910): pid=6238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:41.071171 kernel: audit: type=1006 audit(1768349381.041:911): pid=6238 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 14 00:09:41.041000 audit[6238]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff9d27e80 a2=3 a3=0 items=0 ppid=1 pid=6238 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:09:41.072193 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 14 00:09:41.088945 kernel: audit: type=1300 audit(1768349381.041:911): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff9d27e80 a2=3 a3=0 items=0 ppid=1 pid=6238 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:09:41.041000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:09:41.097856 kernel: audit: type=1327 audit(1768349381.041:911): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:09:41.090000 audit[6238]: USER_START pid=6238 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:41.117204 kernel: audit: type=1105 audit(1768349381.090:912): pid=6238 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:41.091000 audit[6242]: CRED_ACQ pid=6242 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:41.133197 kernel: audit: type=1103 audit(1768349381.091:913): pid=6242 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:41.313541 sshd[6242]: Connection closed by 10.200.16.10 port 46206 Jan 14 00:09:41.316274 sshd-session[6238]: pam_unix(sshd:session): session closed for user core Jan 14 00:09:41.317000 audit[6238]: USER_END pid=6238 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:41.320980 systemd[1]: session-26.scope: Deactivated successfully. Jan 14 00:09:41.323869 systemd-logind[1997]: Session 26 logged out. Waiting for processes to exit. Jan 14 00:09:41.326356 systemd[1]: sshd@22-10.200.20.18:22-10.200.16.10:46206.service: Deactivated successfully. Jan 14 00:09:41.332823 systemd-logind[1997]: Removed session 26. 
Jan 14 00:09:41.318000 audit[6238]: CRED_DISP pid=6238 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:41.353246 kernel: audit: type=1106 audit(1768349381.317:914): pid=6238 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:41.353317 kernel: audit: type=1104 audit(1768349381.318:915): pid=6238 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:41.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.18:22-10.200.16.10:46206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:09:41.734702 kubelet[3576]: E0114 00:09:41.734462 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zrb5n" podUID="a80d0249-070a-486c-a74c-948bf824745a" Jan 14 00:09:42.734196 kubelet[3576]: E0114 00:09:42.734147 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df7698878-qxjdm" podUID="9bd87eab-d892-4bfa-b953-a8d30659ec75" Jan 14 00:09:44.734869 kubelet[3576]: E0114 00:09:44.734818 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jrsmb" podUID="485f55bc-0719-47ec-b844-40d9b8f86f0d" Jan 14 00:09:46.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.18:22-10.200.16.10:46220 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:09:46.405239 systemd[1]: Started sshd@23-10.200.20.18:22-10.200.16.10:46220.service - OpenSSH per-connection server daemon (10.200.16.10:46220). Jan 14 00:09:46.408492 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 00:09:46.408719 kernel: audit: type=1130 audit(1768349386.404:917): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.18:22-10.200.16.10:46220 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:09:46.816000 audit[6260]: USER_ACCT pid=6260 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:46.833508 sshd[6260]: Accepted publickey for core from 10.200.16.10 port 46220 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:09:46.835338 sshd-session[6260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:09:46.833000 audit[6260]: CRED_ACQ pid=6260 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:46.850573 kernel: audit: type=1101 audit(1768349386.816:918): pid=6260 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:46.850666 kernel: audit: type=1103 audit(1768349386.833:919): pid=6260 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:46.860537 kernel: audit: type=1006 audit(1768349386.833:920): pid=6260 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 14 00:09:46.833000 audit[6260]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffd7481c0 a2=3 a3=0 items=0 ppid=1 pid=6260 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:09:46.864784 systemd-logind[1997]: New session 27 of user core. Jan 14 00:09:46.877295 kernel: audit: type=1300 audit(1768349386.833:920): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffd7481c0 a2=3 a3=0 items=0 ppid=1 pid=6260 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:09:46.833000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:09:46.884364 kernel: audit: type=1327 audit(1768349386.833:920): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:09:46.889227 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 14 00:09:46.891000 audit[6260]: USER_START pid=6260 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:46.893000 audit[6264]: CRED_ACQ pid=6264 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:46.927777 kernel: audit: type=1105 audit(1768349386.891:921): pid=6260 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:46.928066 kernel: audit: type=1103 audit(1768349386.893:922): pid=6264 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:47.082428 sshd[6264]: Connection closed by 10.200.16.10 port 46220 Jan 14 00:09:47.082954 sshd-session[6260]: pam_unix(sshd:session): session closed for user core Jan 14 00:09:47.083000 audit[6260]: USER_END pid=6260 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:47.088803 systemd[1]: sshd@23-10.200.20.18:22-10.200.16.10:46220.service: Deactivated successfully. Jan 14 00:09:47.091333 systemd[1]: session-27.scope: Deactivated successfully. Jan 14 00:09:47.085000 audit[6260]: CRED_DISP pid=6260 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:47.107041 systemd-logind[1997]: Session 27 logged out. Waiting for processes to exit. Jan 14 00:09:47.122203 kernel: audit: type=1106 audit(1768349387.083:923): pid=6260 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:47.122285 kernel: audit: type=1104 audit(1768349387.085:924): pid=6260 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:47.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.18:22-10.200.16.10:46220 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:09:47.124333 systemd-logind[1997]: Removed session 27. 
Jan 14 00:09:48.734238 kubelet[3576]: E0114 00:09:48.734187 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7757665449-ss5lr" podUID="0d1fcd1d-b4e7-438b-8969-12e4764b6063" Jan 14 00:09:52.163966 systemd[1]: Started sshd@24-10.200.20.18:22-10.200.16.10:49930.service - OpenSSH per-connection server daemon (10.200.16.10:49930). Jan 14 00:09:52.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.18:22-10.200.16.10:49930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:09:52.167980 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 00:09:52.168069 kernel: audit: type=1130 audit(1768349392.163:926): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.18:22-10.200.16.10:49930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:09:52.582000 audit[6300]: USER_ACCT pid=6300 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:52.602218 sshd[6300]: Accepted publickey for core from 10.200.16.10 port 49930 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:09:52.603734 sshd-session[6300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:09:52.601000 audit[6300]: CRED_ACQ pid=6300 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:52.620354 kernel: audit: type=1101 audit(1768349392.582:927): pid=6300 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:52.620437 kernel: audit: type=1103 audit(1768349392.601:928): pid=6300 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:09:52.626813 systemd-logind[1997]: New session 28 of user core. 
Jan 14 00:09:52.631771 kernel: audit: type=1006 audit(1768349392.602:929): pid=6300 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1
Jan 14 00:09:52.602000 audit[6300]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff4b6b650 a2=3 a3=0 items=0 ppid=1 pid=6300 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 00:09:52.649308 kernel: audit: type=1300 audit(1768349392.602:929): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff4b6b650 a2=3 a3=0 items=0 ppid=1 pid=6300 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 00:09:52.602000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 00:09:52.656311 kernel: audit: type=1327 audit(1768349392.602:929): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 00:09:52.657167 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 14 00:09:52.681000 audit[6300]: USER_START pid=6300 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 00:09:52.683000 audit[6304]: CRED_ACQ pid=6304 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 00:09:52.717269 kernel: audit: type=1105 audit(1768349392.681:930): pid=6300 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 00:09:52.717357 kernel: audit: type=1103 audit(1768349392.683:931): pid=6304 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 00:09:52.878809 sshd[6304]: Connection closed by 10.200.16.10 port 49930
Jan 14 00:09:52.879434 sshd-session[6300]: pam_unix(sshd:session): session closed for user core
Jan 14 00:09:52.880000 audit[6300]: USER_END pid=6300 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 00:09:52.884748 systemd-logind[1997]: Session 28 logged out. Waiting for processes to exit.
Jan 14 00:09:52.885559 systemd[1]: sshd@24-10.200.20.18:22-10.200.16.10:49930.service: Deactivated successfully.
Jan 14 00:09:52.889987 systemd[1]: session-28.scope: Deactivated successfully.
Jan 14 00:09:52.895805 systemd-logind[1997]: Removed session 28.
Jan 14 00:09:52.880000 audit[6300]: CRED_DISP pid=6300 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 00:09:52.920982 kernel: audit: type=1106 audit(1768349392.880:932): pid=6300 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 00:09:52.921067 kernel: audit: type=1104 audit(1768349392.880:933): pid=6300 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 00:09:52.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.18:22-10.200.16.10:49930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:09:53.736779 kubelet[3576]: E0114 00:09:53.736740 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79f767b88f-ntdwm" podUID="ce38d8d5-b119-4ec5-8427-02101a96fcd0"
Jan 14 00:09:53.737403 kubelet[3576]: E0114 00:09:53.737374 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df7698878-qxjdm" podUID="9bd87eab-d892-4bfa-b953-a8d30659ec75"
Jan 14 00:09:53.737728 containerd[2043]: time="2026-01-14T00:09:53.737622326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 14 00:09:53.995964 containerd[2043]: time="2026-01-14T00:09:53.995681262Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 14 00:09:53.998942 containerd[2043]: time="2026-01-14T00:09:53.998906517Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 14 00:09:53.999064 containerd[2043]: time="2026-01-14T00:09:53.998923797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0"
Jan 14 00:09:53.999240 kubelet[3576]: E0114 00:09:53.999138 3576 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 14 00:09:53.999240 kubelet[3576]: E0114 00:09:53.999188 3576 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 14 00:09:53.999455 kubelet[3576]: E0114 00:09:53.999432 3576 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-79587cb578-7tgbl_calico-system(ca6326e0-1651-4ad3-9a4d-868c20d45353): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 14 00:09:53.999791 containerd[2043]: time="2026-01-14T00:09:53.999766642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 14 00:09:54.273163 containerd[2043]: time="2026-01-14T00:09:54.273042277Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 14 00:09:54.277806 containerd[2043]: time="2026-01-14T00:09:54.277767392Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 14 00:09:54.278210 containerd[2043]: time="2026-01-14T00:09:54.277863331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0"
Jan 14 00:09:54.278287 kubelet[3576]: E0114 00:09:54.278021 3576 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 14 00:09:54.278287 kubelet[3576]: E0114 00:09:54.278068 3576 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 14 00:09:54.278287 kubelet[3576]: E0114 00:09:54.278224 3576 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-zrb5n_calico-system(a80d0249-070a-486c-a74c-948bf824745a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 14 00:09:54.278287 kubelet[3576]: E0114 00:09:54.278267 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zrb5n" podUID="a80d0249-070a-486c-a74c-948bf824745a"
Jan 14 00:09:54.278595 containerd[2043]: time="2026-01-14T00:09:54.278531218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 14 00:09:54.545950 containerd[2043]: time="2026-01-14T00:09:54.545049781Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 14 00:09:54.548965 containerd[2043]: time="2026-01-14T00:09:54.548919266Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 14 00:09:54.549084 containerd[2043]: time="2026-01-14T00:09:54.549017190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0"
Jan 14 00:09:54.549423 kubelet[3576]: E0114 00:09:54.549227 3576 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 14 00:09:54.549423 kubelet[3576]: E0114 00:09:54.549276 3576 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 14 00:09:54.549423 kubelet[3576]: E0114 00:09:54.549356 3576 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-79587cb578-7tgbl_calico-system(ca6326e0-1651-4ad3-9a4d-868c20d45353): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 14 00:09:54.549423 kubelet[3576]: E0114 00:09:54.549387 3576 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79587cb578-7tgbl" podUID="ca6326e0-1651-4ad3-9a4d-868c20d45353"