Jan 14 00:02:33.320037 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490] Jan 14 00:02:33.320056 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Tue Jan 13 22:00:26 -00 2026 Jan 14 00:02:33.320062 kernel: KASLR enabled Jan 14 00:02:33.320067 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jan 14 00:02:33.320072 kernel: printk: legacy bootconsole [pl11] enabled Jan 14 00:02:33.320076 kernel: efi: EFI v2.7 by EDK II Jan 14 00:02:33.320081 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89d018 RNG=0x3f979998 MEMRESERVE=0x3db83598 Jan 14 00:02:33.320085 kernel: random: crng init done Jan 14 00:02:33.320089 kernel: secureboot: Secure boot disabled Jan 14 00:02:33.320094 kernel: ACPI: Early table checksum verification disabled Jan 14 00:02:33.320098 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL) Jan 14 00:02:33.320102 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 00:02:33.320106 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 00:02:33.320111 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 14 00:02:33.320117 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 00:02:33.320122 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 00:02:33.320126 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 00:02:33.320132 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 00:02:33.320136 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 00:02:33.320141 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 00:02:33.320145 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jan 14 00:02:33.320150 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 00:02:33.320154 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jan 14 00:02:33.320171 kernel: ACPI: Use ACPI SPCR as default console: Yes Jan 14 00:02:33.320176 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 14 00:02:33.320180 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug Jan 14 00:02:33.320185 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug Jan 14 00:02:33.320191 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 14 00:02:33.320195 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 14 00:02:33.320200 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 14 00:02:33.320205 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 14 00:02:33.320209 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 14 00:02:33.320214 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 14 00:02:33.320218 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 14 00:02:33.320223 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 14 00:02:33.320227 kernel: ACPI: SRAT: Node 0 PXM 
0 [mem 0x800000000000-0xffffffffffff] hotplug Jan 14 00:02:33.320231 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff] Jan 14 00:02:33.320236 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff] Jan 14 00:02:33.320241 kernel: Zone ranges: Jan 14 00:02:33.320246 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jan 14 00:02:33.320252 kernel: DMA32 empty Jan 14 00:02:33.320257 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jan 14 00:02:33.320262 kernel: Device empty Jan 14 00:02:33.320267 kernel: Movable zone start for each node Jan 14 00:02:33.320272 kernel: Early memory node ranges Jan 14 00:02:33.320276 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jan 14 00:02:33.320281 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff] Jan 14 00:02:33.320286 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff] Jan 14 00:02:33.320290 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff] Jan 14 00:02:33.320295 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff] Jan 14 00:02:33.320300 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff] Jan 14 00:02:33.320304 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jan 14 00:02:33.320310 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jan 14 00:02:33.320315 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jan 14 00:02:33.320319 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1 Jan 14 00:02:33.320324 kernel: psci: probing for conduit method from ACPI. Jan 14 00:02:33.320329 kernel: psci: PSCIv1.3 detected in firmware. Jan 14 00:02:33.320333 kernel: psci: Using standard PSCI v0.2 function IDs Jan 14 00:02:33.320338 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jan 14 00:02:33.320342 kernel: psci: SMC Calling Convention v1.4 Jan 14 00:02:33.320347 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jan 14 00:02:33.320352 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jan 14 00:02:33.320356 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jan 14 00:02:33.320361 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jan 14 00:02:33.320367 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 14 00:02:33.320371 kernel: Detected PIPT I-cache on CPU0 Jan 14 00:02:33.320376 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Jan 14 00:02:33.320381 kernel: CPU features: detected: GIC system register CPU interface Jan 14 00:02:33.320386 kernel: CPU features: detected: Spectre-v4 Jan 14 00:02:33.320390 kernel: CPU features: detected: Spectre-BHB Jan 14 00:02:33.320395 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 14 00:02:33.320400 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 14 00:02:33.320404 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Jan 14 00:02:33.320409 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 14 00:02:33.320414 kernel: alternatives: applying boot alternatives Jan 14 00:02:33.320420 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=3d3f73de8d2693594dfefd279d2c8d77c282a05a4cbc54177503d31784261f6b Jan 14 00:02:33.320425 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 14 00:02:33.320430 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 14 00:02:33.320434 kernel: Fallback order for Node 0: 0 Jan 14 00:02:33.320439 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Jan 14 00:02:33.320444 kernel: Policy zone: Normal Jan 14 00:02:33.320448 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 14 00:02:33.320453 kernel: software IO TLB: area num 2. Jan 14 00:02:33.320458 kernel: software IO TLB: mapped [mem 0x0000000037370000-0x000000003b370000] (64MB) Jan 14 00:02:33.320463 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 14 00:02:33.320468 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 14 00:02:33.320474 kernel: rcu: RCU event tracing is enabled. Jan 14 00:02:33.320478 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 14 00:02:33.320483 kernel: Trampoline variant of Tasks RCU enabled. Jan 14 00:02:33.320488 kernel: Tracing variant of Tasks RCU enabled. Jan 14 00:02:33.320493 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 14 00:02:33.320497 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 14 00:02:33.320502 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 00:02:33.320507 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 14 00:02:33.320512 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 14 00:02:33.320516 kernel: GICv3: 960 SPIs implemented Jan 14 00:02:33.320522 kernel: GICv3: 0 Extended SPIs implemented Jan 14 00:02:33.320526 kernel: Root IRQ handler: gic_handle_irq Jan 14 00:02:33.320531 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jan 14 00:02:33.320536 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Jan 14 00:02:33.320540 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 14 00:02:33.320545 kernel: ITS: No ITS available, not enabling LPIs Jan 14 00:02:33.320550 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 14 00:02:33.320555 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Jan 14 00:02:33.320559 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 14 00:02:33.320564 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Jan 14 00:02:33.320569 kernel: Console: colour dummy device 80x25 Jan 14 00:02:33.320575 kernel: printk: legacy console [tty1] enabled Jan 14 00:02:33.320580 kernel: ACPI: Core revision 20240827 Jan 14 00:02:33.320585 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Jan 14 00:02:33.320590 kernel: pid_max: default: 32768 minimum: 301 Jan 14 00:02:33.320595 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 14 00:02:33.320600 kernel: landlock: Up and running. Jan 14 00:02:33.320605 kernel: SELinux: Initializing. Jan 14 00:02:33.320611 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 14 00:02:33.320616 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 14 00:02:33.320621 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1 Jan 14 00:02:33.320626 kernel: Hyper-V: Host Build 10.0.26102.1172-1-0 Jan 14 00:02:33.320634 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 14 00:02:33.320641 kernel: rcu: Hierarchical SRCU implementation. Jan 14 00:02:33.320646 kernel: rcu: Max phase no-delay instances is 400. Jan 14 00:02:33.320651 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 14 00:02:33.320656 kernel: Remapping and enabling EFI services. Jan 14 00:02:33.320662 kernel: smp: Bringing up secondary CPUs ... Jan 14 00:02:33.320667 kernel: Detected PIPT I-cache on CPU1 Jan 14 00:02:33.320673 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 14 00:02:33.320678 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Jan 14 00:02:33.320684 kernel: smp: Brought up 1 node, 2 CPUs Jan 14 00:02:33.320689 kernel: SMP: Total of 2 processors activated. 
Jan 14 00:02:33.320694 kernel: CPU: All CPU(s) started at EL1 Jan 14 00:02:33.320699 kernel: CPU features: detected: 32-bit EL0 Support Jan 14 00:02:33.320704 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 14 00:02:33.320710 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 14 00:02:33.320715 kernel: CPU features: detected: Common not Private translations Jan 14 00:02:33.320721 kernel: CPU features: detected: CRC32 instructions Jan 14 00:02:33.320726 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Jan 14 00:02:33.320732 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 14 00:02:33.320737 kernel: CPU features: detected: LSE atomic instructions Jan 14 00:02:33.320742 kernel: CPU features: detected: Privileged Access Never Jan 14 00:02:33.320747 kernel: CPU features: detected: Speculation barrier (SB) Jan 14 00:02:33.320752 kernel: CPU features: detected: TLB range maintenance instructions Jan 14 00:02:33.320758 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 14 00:02:33.320764 kernel: CPU features: detected: Scalable Vector Extension Jan 14 00:02:33.320769 kernel: alternatives: applying system-wide alternatives Jan 14 00:02:33.320774 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Jan 14 00:02:33.320779 kernel: SVE: maximum available vector length 16 bytes per vector Jan 14 00:02:33.320784 kernel: SVE: default vector length 16 bytes per vector Jan 14 00:02:33.320789 kernel: Memory: 3979900K/4194160K available (11200K kernel code, 2458K rwdata, 9088K rodata, 12480K init, 1038K bss, 193072K reserved, 16384K cma-reserved) Jan 14 00:02:33.320796 kernel: devtmpfs: initialized Jan 14 00:02:33.320801 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 14 00:02:33.320806 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 14 00:02:33.320812 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 14 00:02:33.320817 kernel: 0 pages in range for non-PLT usage Jan 14 00:02:33.320822 kernel: 515168 pages in range for PLT usage Jan 14 00:02:33.320827 kernel: pinctrl core: initialized pinctrl subsystem Jan 14 00:02:33.320833 kernel: SMBIOS 3.1.0 present. Jan 14 00:02:33.320838 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025 Jan 14 00:02:33.320843 kernel: DMI: Memory slots populated: 2/2 Jan 14 00:02:33.320848 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 14 00:02:33.320854 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 14 00:02:33.320859 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 14 00:02:33.320864 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 14 00:02:33.320869 kernel: audit: initializing netlink subsys (disabled) Jan 14 00:02:33.320876 kernel: audit: type=2000 audit(0.060:1): state=initialized audit_enabled=0 res=1 Jan 14 00:02:33.320881 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 14 00:02:33.320886 kernel: cpuidle: using governor menu Jan 14 00:02:33.320891 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 14 00:02:33.320896 kernel: ASID allocator initialised with 32768 entries Jan 14 00:02:33.320901 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 14 00:02:33.320907 kernel: Serial: AMBA PL011 UART driver Jan 14 00:02:33.320913 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 14 00:02:33.320918 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 14 00:02:33.320923 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 14 00:02:33.320928 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 14 00:02:33.320933 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 14 00:02:33.320938 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 14 00:02:33.320944 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 14 00:02:33.320950 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 14 00:02:33.320955 kernel: ACPI: Added _OSI(Module Device) Jan 14 00:02:33.320960 kernel: ACPI: Added _OSI(Processor Device) Jan 14 00:02:33.320965 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 14 00:02:33.320970 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 14 00:02:33.320975 kernel: ACPI: Interpreter enabled Jan 14 00:02:33.320980 kernel: ACPI: Using GIC for interrupt routing Jan 14 00:02:33.320987 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 14 00:02:33.320992 kernel: printk: legacy console [ttyAMA0] enabled Jan 14 00:02:33.320997 kernel: printk: legacy bootconsole [pl11] disabled Jan 14 00:02:33.321002 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 14 00:02:33.321007 kernel: ACPI: CPU0 has been hot-added Jan 14 00:02:33.321012 kernel: ACPI: CPU1 has been hot-added Jan 14 00:02:33.321018 kernel: iommu: Default domain type: Translated Jan 14 00:02:33.321024 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 14 00:02:33.321029 kernel: efivars: Registered efivars operations Jan 14 00:02:33.321034 kernel: vgaarb: loaded Jan 14 00:02:33.321039 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 14 00:02:33.321044 kernel: VFS: Disk quotas dquot_6.6.0 Jan 14 00:02:33.321049 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 14 00:02:33.321054 kernel: pnp: PnP ACPI init Jan 14 00:02:33.321060 kernel: pnp: PnP ACPI: found 0 devices Jan 14 00:02:33.321065 kernel: NET: Registered PF_INET protocol family Jan 14 00:02:33.321071 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 14 00:02:33.321076 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 14 00:02:33.321081 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 14 00:02:33.321086 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 14 00:02:33.321091 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 14 00:02:33.321098 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 14 00:02:33.321103 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 14 00:02:33.321108 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 14 00:02:33.321113 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 14 00:02:33.321118 kernel: PCI: CLS 0 bytes, default 64 Jan 14 00:02:33.321124 kernel: kvm [1]: HYP mode not available Jan 
14 00:02:33.321129 kernel: Initialise system trusted keyrings Jan 14 00:02:33.321134 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 14 00:02:33.321140 kernel: Key type asymmetric registered Jan 14 00:02:33.321145 kernel: Asymmetric key parser 'x509' registered Jan 14 00:02:33.321150 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 14 00:02:33.321156 kernel: io scheduler mq-deadline registered Jan 14 00:02:33.321165 kernel: io scheduler kyber registered Jan 14 00:02:33.321170 kernel: io scheduler bfq registered Jan 14 00:02:33.321175 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 14 00:02:33.321181 kernel: thunder_xcv, ver 1.0 Jan 14 00:02:33.321187 kernel: thunder_bgx, ver 1.0 Jan 14 00:02:33.321192 kernel: nicpf, ver 1.0 Jan 14 00:02:33.321197 kernel: nicvf, ver 1.0 Jan 14 00:02:33.321333 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 14 00:02:33.321403 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-14T00:02:29 UTC (1768348949) Jan 14 00:02:33.321412 kernel: efifb: probing for efifb Jan 14 00:02:33.321417 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 14 00:02:33.321423 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 14 00:02:33.321428 kernel: efifb: scrolling: redraw Jan 14 00:02:33.321433 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 14 00:02:33.321438 kernel: Console: switching to colour frame buffer device 128x48 Jan 14 00:02:33.321443 kernel: fb0: EFI VGA frame buffer device Jan 14 00:02:33.321449 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 14 00:02:33.321454 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 14 00:02:33.321460 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jan 14 00:02:33.321465 kernel: watchdog: NMI not fully supported Jan 14 00:02:33.321470 kernel: NET: Registered PF_INET6 protocol family Jan 14 00:02:33.321475 kernel: watchdog: Hard watchdog permanently disabled Jan 14 00:02:33.321481 kernel: Segment Routing with IPv6 Jan 14 00:02:33.321487 kernel: In-situ OAM (IOAM) with IPv6 Jan 14 00:02:33.321492 kernel: NET: Registered PF_PACKET protocol family Jan 14 00:02:33.321497 kernel: Key type dns_resolver registered Jan 14 00:02:33.321502 kernel: registered taskstats version 1 Jan 14 00:02:33.321507 kernel: Loading compiled-in X.509 certificates Jan 14 00:02:33.321513 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: d16d100cda59d8093883df975a5384fda36b7d35' Jan 14 00:02:33.321518 kernel: Demotion targets for Node 0: null Jan 14 00:02:33.321524 kernel: Key type .fscrypt registered Jan 14 00:02:33.321529 kernel: Key type fscrypt-provisioning registered Jan 14 00:02:33.321534 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 14 00:02:33.321540 kernel: ima: Allocated hash algorithm: sha1 Jan 14 00:02:33.321545 kernel: ima: No architecture policies found Jan 14 00:02:33.321550 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 14 00:02:33.321555 kernel: clk: Disabling unused clocks Jan 14 00:02:33.321560 kernel: PM: genpd: Disabling unused power domains Jan 14 00:02:33.321566 kernel: Freeing unused kernel memory: 12480K Jan 14 00:02:33.321571 kernel: Run /init as init process Jan 14 00:02:33.321576 kernel: with arguments: Jan 14 00:02:33.321581 kernel: /init Jan 14 00:02:33.321586 kernel: with environment: Jan 14 00:02:33.321591 kernel: HOME=/ Jan 14 00:02:33.321597 kernel: TERM=linux Jan 14 00:02:33.321603 kernel: hv_vmbus: Vmbus version:5.3 Jan 14 00:02:33.321608 kernel: SCSI subsystem initialized Jan 14 00:02:33.321613 kernel: hv_vmbus: registering driver hid_hyperv Jan 14 00:02:33.321618 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 14 00:02:33.321703 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 14 00:02:33.321710 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 14 00:02:33.321717 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 14 00:02:33.321722 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 14 00:02:33.321728 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 14 00:02:33.321733 kernel: PTP clock support registered Jan 14 00:02:33.321738 kernel: hv_utils: Registering HyperV Utility Driver Jan 14 00:02:33.321743 kernel: hv_vmbus: registering driver hv_utils Jan 14 00:02:33.321749 kernel: hv_utils: Heartbeat IC version 3.0 Jan 14 00:02:33.321755 kernel: hv_utils: Shutdown IC version 3.2 Jan 14 00:02:33.321760 kernel: hv_utils: TimeSync IC version 4.0 Jan 14 00:02:33.321765 kernel: hv_vmbus: registering driver hv_storvsc Jan 14 00:02:33.321888 kernel: scsi host0: storvsc_host_t Jan 14 00:02:33.321967 kernel: scsi host1: storvsc_host_t Jan 14 00:02:33.322056 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 14 00:02:33.322142 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 14 00:02:33.322296 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 14 00:02:33.322373 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 14 00:02:33.322447 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 14 00:02:33.322520 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 14 00:02:33.322595 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 14 00:02:33.322680 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#125 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 14 00:02:33.322748 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#68 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 14 00:02:33.322755 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 00:02:33.322835 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 14 00:02:33.322909 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 14 00:02:33.322917 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 14 00:02:33.322989 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 14 00:02:33.322996 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 14 00:02:33.323001 kernel: device-mapper: uevent: version 1.0.3 Jan 14 00:02:33.323006 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 14 00:02:33.323012 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 14 00:02:33.323017 kernel: raid6: neonx8 gen() 18539 MB/s Jan 14 00:02:33.323023 kernel: raid6: neonx4 gen() 18575 MB/s Jan 14 00:02:33.323029 kernel: raid6: neonx2 gen() 17080 MB/s Jan 14 00:02:33.323034 kernel: raid6: neonx1 gen() 15051 MB/s Jan 14 00:02:33.323039 kernel: raid6: int64x8 gen() 10555 MB/s Jan 14 00:02:33.323044 kernel: raid6: int64x4 gen() 10614 MB/s Jan 14 00:02:33.323050 kernel: raid6: int64x2 gen() 8964 MB/s Jan 14 00:02:33.323055 kernel: raid6: int64x1 gen() 7010 MB/s Jan 14 00:02:33.323061 kernel: raid6: using algorithm neonx4 gen() 18575 MB/s Jan 14 00:02:33.323067 kernel: raid6: .... xor() 15123 MB/s, rmw enabled Jan 14 00:02:33.323072 kernel: raid6: using neon recovery algorithm Jan 14 00:02:33.323077 kernel: xor: measuring software checksum speed Jan 14 00:02:33.323082 kernel: 8regs : 28559 MB/sec Jan 14 00:02:33.323087 kernel: 32regs : 28818 MB/sec Jan 14 00:02:33.323092 kernel: arm64_neon : 37381 MB/sec Jan 14 00:02:33.323098 kernel: xor: using function: arm64_neon (37381 MB/sec) Jan 14 00:02:33.323104 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 14 00:02:33.323109 kernel: BTRFS: device fsid 68b1ce8e-a637-4e91-acf8-5a2e05e289e5 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (444) Jan 14 00:02:33.323114 kernel: BTRFS info (device dm-0): first mount of filesystem 68b1ce8e-a637-4e91-acf8-5a2e05e289e5 Jan 14 00:02:33.323120 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 14 00:02:33.323125 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 14 00:02:33.323130 kernel: BTRFS info (device dm-0): enabling free space tree Jan 14 00:02:33.323136 kernel: loop: module loaded Jan 14 00:02:33.323142 kernel: loop0: detected capacity change from 0 to 91832 Jan 14 00:02:33.323147 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 14 00:02:33.323153 systemd[1]: Successfully made /usr/ read-only. Jan 14 00:02:33.323169 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 00:02:33.323175 systemd[1]: Detected virtualization microsoft. Jan 14 00:02:33.323182 systemd[1]: Detected architecture arm64. Jan 14 00:02:33.323187 systemd[1]: Running in initrd. Jan 14 00:02:33.323193 systemd[1]: No hostname configured, using default hostname. Jan 14 00:02:33.323199 systemd[1]: Hostname set to . Jan 14 00:02:33.323205 systemd[1]: Initializing machine ID from random generator. Jan 14 00:02:33.323210 systemd[1]: Queued start job for default target initrd.target. Jan 14 00:02:33.323216 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 00:02:33.323223 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 00:02:33.323229 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 14 00:02:33.323235 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 14 00:02:33.323241 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 00:02:33.323247 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 14 00:02:33.323253 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 14 00:02:33.323260 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 00:02:33.323265 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 00:02:33.323271 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 14 00:02:33.323277 systemd[1]: Reached target paths.target - Path Units. Jan 14 00:02:33.323282 systemd[1]: Reached target slices.target - Slice Units. Jan 14 00:02:33.323288 systemd[1]: Reached target swap.target - Swaps. Jan 14 00:02:33.323294 systemd[1]: Reached target timers.target - Timer Units. Jan 14 00:02:33.323300 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 00:02:33.323306 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 00:02:33.323311 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 00:02:33.323317 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 14 00:02:33.323323 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 14 00:02:33.323329 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 00:02:33.323339 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 00:02:33.323346 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 00:02:33.323352 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 00:02:33.323358 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 14 00:02:33.323363 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 14 00:02:33.323370 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 00:02:33.323376 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 14 00:02:33.323382 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 14 00:02:33.323388 systemd[1]: Starting systemd-fsck-usr.service... Jan 14 00:02:33.323394 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 00:02:33.323400 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 00:02:33.323407 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 00:02:33.323428 systemd-journald[582]: Collecting audit messages is enabled. Jan 14 00:02:33.323445 systemd-journald[582]: Journal started Jan 14 00:02:33.323458 systemd-journald[582]: Runtime Journal (/run/log/journal/fa751f0853b64669ba69ccccc1ed9773) is 8M, max 78.3M, 70.3M free. Jan 14 00:02:33.337530 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 14 00:02:33.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:33.338738 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 14 00:02:33.375871 kernel: audit: type=1130 audit(1768348953.337:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:33.375906 kernel: audit: type=1130 audit(1768348953.357:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:33.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:33.358532 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 00:02:33.402187 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 14 00:02:33.402254 kernel: audit: type=1130 audit(1768348953.393:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:33.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:33.394383 systemd[1]: Finished systemd-fsck-usr.service. Jan 14 00:02:33.423725 kernel: audit: type=1130 audit(1768348953.409:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:33.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:33.424403 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 00:02:33.442513 kernel: Bridge firewalling registered Jan 14 00:02:33.434019 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 00:02:33.439504 systemd-modules-load[585]: Inserted module 'br_netfilter' Jan 14 00:02:33.455674 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 00:02:33.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:33.479260 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 00:02:33.501282 kernel: audit: type=1130 audit(1768348953.461:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:02:33.501318 kernel: audit: type=1130 audit(1768348953.483:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:33.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:33.486209 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 00:02:33.504977 systemd-tmpfiles[594]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 14 00:02:33.520926 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 00:02:33.530873 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 00:02:33.554593 kernel: audit: type=1130 audit(1768348953.535:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:33.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:33.556137 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 00:02:33.587317 kernel: audit: type=1130 audit(1768348953.560:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:33.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:33.576364 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 00:02:33.593491 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 00:02:33.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:33.614216 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 00:02:33.628530 kernel: audit: type=1130 audit(1768348953.597:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:33.612000 audit: BPF prog-id=6 op=LOAD Jan 14 00:02:33.630305 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 00:02:33.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:33.637196 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 00:02:33.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:02:33.649544 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 14 00:02:33.712868 dracut-cmdline[622]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=3d3f73de8d2693594dfefd279d2c8d77c282a05a4cbc54177503d31784261f6b Jan 14 00:02:33.751859 systemd-resolved[616]: Positive Trust Anchors: Jan 14 00:02:33.751873 systemd-resolved[616]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 00:02:33.751875 systemd-resolved[616]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 00:02:33.751918 systemd-resolved[616]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 00:02:33.773512 systemd-resolved[616]: Defaulting to hostname 'linux'. Jan 14 00:02:33.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:33.795827 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 00:02:33.806659 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 00:02:33.881195 kernel: Loading iSCSI transport class v2.0-870. Jan 14 00:02:33.919350 kernel: iscsi: registered transport (tcp) Jan 14 00:02:33.949324 kernel: iscsi: registered transport (qla4xxx) Jan 14 00:02:33.949393 kernel: QLogic iSCSI HBA Driver Jan 14 00:02:33.994002 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 00:02:34.016409 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 00:02:34.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:34.029133 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 00:02:34.075438 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 14 00:02:34.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:34.081772 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 14 00:02:34.110897 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 14 00:02:34.129837 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jan 14 00:02:34.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:34.138000 audit: BPF prog-id=7 op=LOAD Jan 14 00:02:34.139000 audit: BPF prog-id=8 op=LOAD Jan 14 00:02:34.141721 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 00:02:34.234197 systemd-udevd[839]: Using default interface naming scheme 'v257'. Jan 14 00:02:34.240014 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 00:02:34.272127 kernel: kauditd_printk_skb: 9 callbacks suppressed Jan 14 00:02:34.272152 kernel: audit: type=1130 audit(1768348954.251:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:34.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:34.253993 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 14 00:02:34.284363 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 00:02:34.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:34.294000 audit: BPF prog-id=9 op=LOAD Jan 14 00:02:34.295673 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 00:02:34.320697 kernel: audit: type=1130 audit(1768348954.289:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:34.320719 kernel: audit: type=1334 audit(1768348954.294:22): prog-id=9 op=LOAD Jan 14 00:02:34.323626 dracut-pre-trigger[962]: rd.md=0: removing MD RAID activation Jan 14 00:02:34.351079 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 00:02:34.355139 systemd-networkd[965]: lo: Link UP Jan 14 00:02:34.355142 systemd-networkd[965]: lo: Gained carrier Jan 14 00:02:34.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:34.364471 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 00:02:34.385339 kernel: audit: type=1130 audit(1768348954.363:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:34.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:34.386061 systemd[1]: Reached target network.target - Network. Jan 14 00:02:34.409402 kernel: audit: type=1130 audit(1768348954.384:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:02:34.400756 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 00:02:34.454034 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 00:02:34.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:34.478343 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 14 00:02:34.488416 kernel: audit: type=1130 audit(1768348954.463:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:34.548181 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#106 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 14 00:02:34.570594 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 00:02:34.575386 kernel: hv_vmbus: registering driver hv_netvsc Jan 14 00:02:34.572988 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 00:02:34.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:34.580328 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 00:02:34.612255 kernel: audit: type=1131 audit(1768348954.579:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:34.606116 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 00:02:34.620677 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 00:02:34.622191 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 00:02:34.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:34.631836 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 00:02:34.668774 kernel: audit: type=1130 audit(1768348954.630:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:34.668794 kernel: audit: type=1131 audit(1768348954.630:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:34.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:34.680604 kernel: hv_netvsc 002248b9-d64a-0022-48b9-d64a002248b9 eth0: VF slot 1 added Jan 14 00:02:34.687684 systemd-networkd[965]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 00:02:34.687692 systemd-networkd[965]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 14 00:02:34.697688 systemd-networkd[965]: eth0: Link UP Jan 14 00:02:34.719015 kernel: hv_vmbus: registering driver hv_pci Jan 14 00:02:34.697973 systemd-networkd[965]: eth0: Gained carrier Jan 14 00:02:34.738518 kernel: hv_pci 2aba0a6e-c16c-4375-913d-5fa12aaa8476: PCI VMBus probing: Using version 0x10004 Jan 14 00:02:34.738745 kernel: audit: type=1130 audit(1768348954.723:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:34.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:34.697986 systemd-networkd[965]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 00:02:34.714045 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 00:02:34.738999 systemd-networkd[965]: eth0: DHCPv4 address 10.200.20.29/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 14 00:02:34.783066 kernel: hv_pci 2aba0a6e-c16c-4375-913d-5fa12aaa8476: PCI host bridge to bus c16c:00 Jan 14 00:02:34.783380 kernel: pci_bus c16c:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 14 00:02:34.783491 kernel: pci_bus c16c:00: No busn resource found for root bus, will use [bus 00-ff] Jan 14 00:02:34.783566 kernel: pci c16c:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Jan 14 00:02:34.789176 kernel: pci c16c:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 14 00:02:34.794183 kernel: pci c16c:00:02.0: enabling Extended Tags Jan 14 00:02:34.810214 kernel: pci c16c:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at c16c:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Jan 14 00:02:34.820457 kernel: pci_bus c16c:00: busn_res: [bus 00-ff] end is updated to 00 Jan 14 00:02:34.820694 kernel: pci c16c:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Jan 14 00:02:34.950683 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 14 00:02:34.967323 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 14 00:02:34.994984 kernel: mlx5_core c16c:00:02.0: enabling device (0000 -> 0002) Jan 14 00:02:34.996676 kernel: mlx5_core c16c:00:02.0: PTM is not supported by PCIe Jan 14 00:02:35.003365 kernel: mlx5_core c16c:00:02.0: firmware version: 16.30.5026 Jan 14 00:02:35.048699 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 14 00:02:35.095021 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 14 00:02:35.136323 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 14 00:02:35.195478 kernel: hv_netvsc 002248b9-d64a-0022-48b9-d64a002248b9 eth0: VF registering: eth1 Jan 14 00:02:35.195718 kernel: mlx5_core c16c:00:02.0 eth1: joined to eth0 Jan 14 00:02:35.196714 kernel: mlx5_core c16c:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 14 00:02:35.210111 systemd-networkd[965]: eth1: Interface name change detected, renamed to enP49516s1. 
Jan 14 00:02:35.215092 kernel: mlx5_core c16c:00:02.0 enP49516s1: renamed from eth1 Jan 14 00:02:35.339187 kernel: mlx5_core c16c:00:02.0 enP49516s1: Link up Jan 14 00:02:35.372230 kernel: hv_netvsc 002248b9-d64a-0022-48b9-d64a002248b9 eth0: Data path switched to VF: enP49516s1 Jan 14 00:02:35.371968 systemd-networkd[965]: enP49516s1: Link UP Jan 14 00:02:35.502215 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 14 00:02:35.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:35.507005 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 00:02:35.515828 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 00:02:35.525661 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 00:02:35.535638 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 14 00:02:35.566006 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 14 00:02:35.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:35.703388 systemd-networkd[965]: enP49516s1: Gained carrier Jan 14 00:02:36.237142 disk-uuid[1072]: Warning: The kernel is still using the old partition table. Jan 14 00:02:36.237142 disk-uuid[1072]: The new table will be used at the next reboot or after you Jan 14 00:02:36.237142 disk-uuid[1072]: run partprobe(8) or kpartx(8) Jan 14 00:02:36.237142 disk-uuid[1072]: The operation has completed successfully. Jan 14 00:02:36.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:36.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:36.246454 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 14 00:02:36.246577 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 14 00:02:36.256785 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 14 00:02:36.320462 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1233) Jan 14 00:02:36.320522 kernel: BTRFS info (device sda6): first mount of filesystem 2541ae65-ff33-437e-8e42-0fd50870835d Jan 14 00:02:36.324934 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 14 00:02:36.348921 kernel: BTRFS info (device sda6): turning on async discard Jan 14 00:02:36.348986 kernel: BTRFS info (device sda6): enabling free space tree Jan 14 00:02:36.358232 kernel: BTRFS info (device sda6): last unmount of filesystem 2541ae65-ff33-437e-8e42-0fd50870835d Jan 14 00:02:36.360220 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 14 00:02:36.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:36.369621 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 14 00:02:36.551354 systemd-networkd[965]: eth0: Gained IPv6LL Jan 14 00:02:37.252472 ignition[1252]: Ignition 2.24.0 Jan 14 00:02:37.252487 ignition[1252]: Stage: fetch-offline Jan 14 00:02:37.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:37.255722 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 00:02:37.252610 ignition[1252]: no configs at "/usr/lib/ignition/base.d" Jan 14 00:02:37.263568 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 14 00:02:37.254220 ignition[1252]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 00:02:37.254341 ignition[1252]: parsed url from cmdline: "" Jan 14 00:02:37.254344 ignition[1252]: no config URL provided Jan 14 00:02:37.254405 ignition[1252]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 00:02:37.254418 ignition[1252]: no config at "/usr/lib/ignition/user.ign" Jan 14 00:02:37.254425 ignition[1252]: failed to fetch config: resource requires networking Jan 14 00:02:37.254598 ignition[1252]: Ignition finished successfully Jan 14 00:02:37.290348 ignition[1258]: Ignition 2.24.0 Jan 14 00:02:37.290354 ignition[1258]: Stage: fetch Jan 14 00:02:37.290570 ignition[1258]: no configs at "/usr/lib/ignition/base.d" Jan 14 00:02:37.290577 ignition[1258]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 00:02:37.290663 ignition[1258]: parsed url from cmdline: "" Jan 14 00:02:37.290665 ignition[1258]: no config URL provided Jan 14 00:02:37.290668 ignition[1258]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 00:02:37.290673 ignition[1258]: no config at "/usr/lib/ignition/user.ign" Jan 14 00:02:37.290687 ignition[1258]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 14 00:02:37.392844 ignition[1258]: GET result: OK Jan 14 00:02:37.392904 ignition[1258]: config has been read from IMDS userdata Jan 14 00:02:37.392920 ignition[1258]: parsing config with SHA512: 0c1a08536dc61de234348d1ca4713a1f79a4de47ea050461a10335e87b5daf5e8a02a1addce9b9185533e84cf623940bccf1c9fbd42024d104034a81e6841a50 Jan 14 00:02:37.400852 unknown[1258]: fetched base config from "system" Jan 14 00:02:37.400859 unknown[1258]: fetched base config from "system" Jan 14 00:02:37.401118 ignition[1258]: fetch: fetch complete Jan 14 00:02:37.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:37.400870 unknown[1258]: fetched user config from "azure" Jan 14 00:02:37.401122 ignition[1258]: fetch: fetch passed Jan 14 00:02:37.405703 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 14 00:02:37.401178 ignition[1258]: Ignition finished successfully Jan 14 00:02:37.413630 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 14 00:02:37.448131 ignition[1264]: Ignition 2.24.0 Jan 14 00:02:37.448149 ignition[1264]: Stage: kargs Jan 14 00:02:37.448378 ignition[1264]: no configs at "/usr/lib/ignition/base.d" Jan 14 00:02:37.455225 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 14 00:02:37.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:37.448386 ignition[1264]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 00:02:37.463888 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 14 00:02:37.449009 ignition[1264]: kargs: kargs passed Jan 14 00:02:37.449055 ignition[1264]: Ignition finished successfully Jan 14 00:02:37.496502 ignition[1271]: Ignition 2.24.0 Jan 14 00:02:37.496519 ignition[1271]: Stage: disks Jan 14 00:02:37.502428 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 14 00:02:37.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:37.496733 ignition[1271]: no configs at "/usr/lib/ignition/base.d" Jan 14 00:02:37.507229 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 14 00:02:37.496741 ignition[1271]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 00:02:37.515971 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 00:02:37.497383 ignition[1271]: disks: disks passed Jan 14 00:02:37.525020 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 00:02:37.497431 ignition[1271]: Ignition finished successfully Jan 14 00:02:37.534015 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 00:02:37.542982 systemd[1]: Reached target basic.target - Basic System. Jan 14 00:02:37.552892 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 14 00:02:37.656311 systemd-fsck[1279]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks Jan 14 00:02:37.664257 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 14 00:02:37.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:37.671412 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 14 00:02:37.960184 kernel: EXT4-fs (sda9): mounted filesystem db887ae3-d64c-46de-9f1e-de51a801ae44 r/w with ordered data mode. Quota mode: none. Jan 14 00:02:37.960620 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 14 00:02:37.964536 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 14 00:02:38.000887 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 00:02:38.014767 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 14 00:02:38.027832 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 14 00:02:38.049763 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1293) Jan 14 00:02:38.049806 kernel: BTRFS info (device sda6): first mount of filesystem 2541ae65-ff33-437e-8e42-0fd50870835d Jan 14 00:02:38.049793 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
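
The flatcar-metadata-hostname step logged just below resolves the instance name from IMDS and writes it into the target root. A minimal sketch of those two steps, assuming the same endpoint and the /sysroot/etc/hostname path that appear in the log (timeout and error handling are illustrative only):

    # Sketch of the hostname step logged below: fetch the Azure compute name
    # and persist it into the mounted target root. Endpoint and target path
    # are taken from the log; everything else is an assumption.
    import urllib.request

    IMDS_NAME = (
        "http://169.254.169.254/metadata/instance/compute/name"
        "?api-version=2017-08-01&format=text"
    )

    req = urllib.request.Request(IMDS_NAME, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        hostname = resp.read().decode().strip()

    with open("/sysroot/etc/hostname", "w") as f:
        f.write(hostname + "\n")
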
Jan 14 00:02:38.063241 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 14 00:02:38.049841 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 00:02:38.074236 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 14 00:02:38.082339 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 14 00:02:38.102851 kernel: BTRFS info (device sda6): turning on async discard Jan 14 00:02:38.102896 kernel: BTRFS info (device sda6): enabling free space tree Jan 14 00:02:38.104110 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 00:02:38.610700 coreos-metadata[1295]: Jan 14 00:02:38.610 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 14 00:02:38.617239 coreos-metadata[1295]: Jan 14 00:02:38.617 INFO Fetch successful Jan 14 00:02:38.617239 coreos-metadata[1295]: Jan 14 00:02:38.617 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 14 00:02:38.629867 coreos-metadata[1295]: Jan 14 00:02:38.629 INFO Fetch successful Jan 14 00:02:38.634418 coreos-metadata[1295]: Jan 14 00:02:38.629 INFO wrote hostname ci-4547.0.0-n-d5ef04779b to /sysroot/etc/hostname Jan 14 00:02:38.634998 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 00:02:38.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:39.539341 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 14 00:02:39.551566 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 14 00:02:39.551585 kernel: audit: type=1130 audit(1768348959.544:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:39.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:39.545085 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 14 00:02:39.575298 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 14 00:02:39.601031 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 14 00:02:39.611129 kernel: BTRFS info (device sda6): last unmount of filesystem 2541ae65-ff33-437e-8e42-0fd50870835d Jan 14 00:02:39.623213 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 14 00:02:39.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:39.643905 ignition[1402]: INFO : Ignition 2.24.0 Jan 14 00:02:39.643905 ignition[1402]: INFO : Stage: mount Jan 14 00:02:39.643905 ignition[1402]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 00:02:39.643905 ignition[1402]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 00:02:39.643905 ignition[1402]: INFO : mount: mount passed Jan 14 00:02:39.643905 ignition[1402]: INFO : Ignition finished successfully Jan 14 00:02:39.688334 kernel: audit: type=1130 audit(1768348959.630:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 14 00:02:39.688360 kernel: audit: type=1130 audit(1768348959.651:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:39.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:39.644244 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 14 00:02:39.672290 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 14 00:02:39.700317 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 00:02:39.724180 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1412) Jan 14 00:02:39.734902 kernel: BTRFS info (device sda6): first mount of filesystem 2541ae65-ff33-437e-8e42-0fd50870835d Jan 14 00:02:39.734953 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 14 00:02:39.744446 kernel: BTRFS info (device sda6): turning on async discard Jan 14 00:02:39.744506 kernel: BTRFS info (device sda6): enabling free space tree Jan 14 00:02:39.745977 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 00:02:39.773517 ignition[1429]: INFO : Ignition 2.24.0 Jan 14 00:02:39.773517 ignition[1429]: INFO : Stage: files Jan 14 00:02:39.780043 ignition[1429]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 00:02:39.780043 ignition[1429]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 00:02:39.780043 ignition[1429]: DEBUG : files: compiled without relabeling support, skipping Jan 14 00:02:39.780043 ignition[1429]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 14 00:02:39.780043 ignition[1429]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 14 00:02:39.829180 ignition[1429]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 14 00:02:39.834955 ignition[1429]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 14 00:02:39.840596 ignition[1429]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 14 00:02:39.835218 unknown[1429]: wrote ssh authorized keys file for user: core Jan 14 00:02:39.915200 ignition[1429]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 14 00:02:39.923270 ignition[1429]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 14 00:02:40.071974 ignition[1429]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 14 00:02:40.499833 ignition[1429]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 14 00:02:40.508454 ignition[1429]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 14 00:02:40.508454 ignition[1429]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 14 00:02:40.508454 ignition[1429]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 14 00:02:40.508454 ignition[1429]: INFO : files: createFilesystemsFiles: 
createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 14 00:02:40.508454 ignition[1429]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 00:02:40.508454 ignition[1429]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 00:02:40.508454 ignition[1429]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 00:02:40.508454 ignition[1429]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 00:02:40.568310 ignition[1429]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 00:02:40.568310 ignition[1429]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 00:02:40.568310 ignition[1429]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 14 00:02:40.568310 ignition[1429]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 14 00:02:40.568310 ignition[1429]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 14 00:02:40.568310 ignition[1429]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 14 00:02:41.049592 ignition[1429]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 14 00:02:41.314075 ignition[1429]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 14 00:02:41.314075 ignition[1429]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 14 00:02:41.356346 ignition[1429]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 00:02:41.366247 ignition[1429]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 00:02:41.366247 ignition[1429]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 14 00:02:41.366247 ignition[1429]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 14 00:02:41.366247 ignition[1429]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 14 00:02:41.366247 ignition[1429]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 14 00:02:41.366247 ignition[1429]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 14 00:02:41.366247 ignition[1429]: INFO : files: files passed Jan 14 00:02:41.366247 ignition[1429]: INFO : Ignition finished successfully Jan 14 00:02:41.441107 kernel: audit: type=1130 audit(1768348961.378:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:41.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:41.367242 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 14 00:02:41.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:41.381318 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 14 00:02:41.479361 kernel: audit: type=1130 audit(1768348961.445:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:41.479387 kernel: audit: type=1131 audit(1768348961.445:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:41.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:41.422064 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 14 00:02:41.436863 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 14 00:02:41.436963 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 14 00:02:41.496203 initrd-setup-root-after-ignition[1461]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 00:02:41.496203 initrd-setup-root-after-ignition[1461]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 14 00:02:41.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:41.528957 initrd-setup-root-after-ignition[1465]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 00:02:41.539858 kernel: audit: type=1130 audit(1768348961.508:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:41.500710 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 00:02:41.509528 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 14 00:02:41.540348 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 14 00:02:41.592065 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 14 00:02:41.592205 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 14 00:02:41.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:41.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:02:41.617687 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 14 00:02:41.632623 kernel: audit: type=1130 audit(1768348961.600:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:41.632644 kernel: audit: type=1131 audit(1768348961.600:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:41.637030 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 14 00:02:41.641462 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 14 00:02:41.642319 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 14 00:02:41.683216 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 00:02:41.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:41.705535 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 00:02:41.716415 kernel: audit: type=1130 audit(1768348961.688:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:41.728714 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 00:02:41.728870 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 00:02:41.738381 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 00:02:41.747508 systemd[1]: Stopped target timers.target - Timer Units. Jan 14 00:02:41.755968 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 14 00:02:41.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:41.756103 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 00:02:41.767770 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 00:02:41.772092 systemd[1]: Stopped target basic.target - Basic System. Jan 14 00:02:41.780311 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 14 00:02:41.788681 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 00:02:41.796824 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 14 00:02:41.805558 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 14 00:02:41.814479 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 14 00:02:41.823023 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 00:02:41.832204 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 00:02:41.840276 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 00:02:41.849059 systemd[1]: Stopped target swap.target - Swaps. 
Jan 14 00:02:41.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:41.856216 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 14 00:02:41.856331 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 00:02:41.867415 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 00:02:41.872183 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 00:02:41.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:41.880875 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 14 00:02:41.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:41.885058 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 00:02:41.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:41.890234 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 00:02:41.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:41.890339 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 00:02:41.903654 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 14 00:02:41.903752 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 00:02:41.909056 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 00:02:41.909127 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 00:02:41.972670 ignition[1485]: INFO : Ignition 2.24.0 Jan 14 00:02:41.972670 ignition[1485]: INFO : Stage: umount Jan 14 00:02:41.972670 ignition[1485]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 00:02:41.972670 ignition[1485]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 00:02:41.972670 ignition[1485]: INFO : umount: umount passed Jan 14 00:02:41.972670 ignition[1485]: INFO : Ignition finished successfully Jan 14 00:02:41.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:41.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:41.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:02:42.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:41.916745 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 14 00:02:42.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:42.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:41.916822 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 00:02:42.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:41.927797 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 00:02:42.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:41.952370 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 00:02:42.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:41.966908 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 00:02:41.967078 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 00:02:42.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:41.975907 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 00:02:41.976021 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 00:02:41.984380 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 14 00:02:41.984477 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 00:02:42.000733 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 14 00:02:42.000966 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 00:02:42.012111 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 00:02:42.012250 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 14 00:02:42.019986 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 00:02:42.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:42.020074 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 00:02:42.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:42.027843 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Jan 14 00:02:42.027891 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 00:02:42.037152 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 14 00:02:42.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:42.037198 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 14 00:02:42.045463 systemd[1]: Stopped target network.target - Network. Jan 14 00:02:42.053357 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 00:02:42.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:42.053439 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 00:02:42.062897 systemd[1]: Stopped target paths.target - Path Units. Jan 14 00:02:42.205000 audit: BPF prog-id=9 op=UNLOAD Jan 14 00:02:42.208000 audit: BPF prog-id=6 op=UNLOAD Jan 14 00:02:42.078153 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 14 00:02:42.087184 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 00:02:42.093154 systemd[1]: Stopped target slices.target - Slice Units. Jan 14 00:02:42.101223 systemd[1]: Stopped target sockets.target - Socket Units. Jan 14 00:02:42.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:42.111719 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 00:02:42.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:42.111774 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 00:02:42.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:42.119425 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 00:02:42.119455 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 00:02:42.127729 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 14 00:02:42.127749 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 14 00:02:42.135856 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 00:02:42.135909 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 00:02:42.144578 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 00:02:42.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:42.144620 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 00:02:42.152781 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 00:02:42.163320 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 00:02:42.170835 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jan 14 00:02:42.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:42.171377 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 14 00:02:42.358342 kernel: hv_netvsc 002248b9-d64a-0022-48b9-d64a002248b9 eth0: Data path switched from VF: enP49516s1 Jan 14 00:02:42.171462 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 00:02:42.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:42.188274 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 00:02:42.188358 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 00:02:42.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:42.207192 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 14 00:02:42.214659 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 00:02:42.214702 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 14 00:02:42.228292 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 00:02:42.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:42.236827 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 00:02:42.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:42.236904 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 00:02:42.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:42.246450 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 00:02:42.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:42.246503 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 00:02:42.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:42.254689 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 00:02:42.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:42.254735 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jan 14 00:02:42.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:42.267489 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 00:02:42.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:42.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:42.305908 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 00:02:42.306048 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 00:02:42.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:42.312050 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 00:02:42.312085 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 14 00:02:42.320476 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 14 00:02:42.320501 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 00:02:42.335253 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 00:02:42.335325 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 00:02:42.358157 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 00:02:42.358359 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 14 00:02:42.367406 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 00:02:42.367570 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 00:02:42.382108 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 00:02:42.401701 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 14 00:02:42.401793 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 00:02:42.407223 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 00:02:42.407276 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 00:02:42.419741 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 14 00:02:42.419804 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 00:02:42.428957 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 14 00:02:42.429004 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 00:02:42.439400 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 00:02:42.439442 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 00:02:42.448456 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 14 00:02:42.448558 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 00:02:42.456299 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jan 14 00:02:42.456383 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 00:02:42.465013 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 00:02:42.465087 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 00:02:42.475653 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 00:02:42.484477 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 00:02:42.484570 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 00:02:42.493381 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 00:02:42.516158 systemd[1]: Switching root. Jan 14 00:02:42.779176 systemd-journald[582]: Received SIGTERM from PID 1 (systemd). Jan 14 00:02:42.779237 systemd-journald[582]: Journal stopped Jan 14 00:02:46.782856 kernel: SELinux: policy capability network_peer_controls=1 Jan 14 00:02:46.782877 kernel: SELinux: policy capability open_perms=1 Jan 14 00:02:46.782885 kernel: SELinux: policy capability extended_socket_class=1 Jan 14 00:02:46.782891 kernel: SELinux: policy capability always_check_network=0 Jan 14 00:02:46.782898 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 14 00:02:46.782903 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 14 00:02:46.782910 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 14 00:02:46.782917 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 14 00:02:46.782922 kernel: SELinux: policy capability userspace_initial_context=0 Jan 14 00:02:46.782929 systemd[1]: Successfully loaded SELinux policy in 174.240ms. Jan 14 00:02:46.782938 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.709ms. Jan 14 00:02:46.782945 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 00:02:46.782951 systemd[1]: Detected virtualization microsoft. Jan 14 00:02:46.782957 systemd[1]: Detected architecture arm64. Jan 14 00:02:46.782965 systemd[1]: Detected first boot. Jan 14 00:02:46.782972 systemd[1]: Hostname set to . Jan 14 00:02:46.782979 systemd[1]: Initializing machine ID from random generator. Jan 14 00:02:46.782985 zram_generator::config[1528]: No configuration found. Jan 14 00:02:46.782992 kernel: NET: Registered PF_VSOCK protocol family Jan 14 00:02:46.782999 systemd[1]: Populated /etc with preset unit settings. Jan 14 00:02:46.783005 kernel: kauditd_printk_skb: 45 callbacks suppressed Jan 14 00:02:46.783011 kernel: audit: type=1334 audit(1768348965.789:96): prog-id=12 op=LOAD Jan 14 00:02:46.783017 kernel: audit: type=1334 audit(1768348965.792:97): prog-id=3 op=UNLOAD Jan 14 00:02:46.783023 kernel: audit: type=1334 audit(1768348965.794:98): prog-id=13 op=LOAD Jan 14 00:02:46.783030 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Jan 14 00:02:46.783037 kernel: audit: type=1334 audit(1768348965.797:99): prog-id=14 op=LOAD Jan 14 00:02:46.783043 kernel: audit: type=1334 audit(1768348965.797:100): prog-id=4 op=UNLOAD Jan 14 00:02:46.783049 kernel: audit: type=1334 audit(1768348965.797:101): prog-id=5 op=UNLOAD Jan 14 00:02:46.783055 kernel: audit: type=1131 audit(1768348965.797:102): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:46.783062 kernel: audit: type=1334 audit(1768348965.832:103): prog-id=12 op=UNLOAD Jan 14 00:02:46.783068 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 14 00:02:46.783075 kernel: audit: type=1130 audit(1768348965.846:104): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:46.783082 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 14 00:02:46.783089 kernel: audit: type=1131 audit(1768348965.846:105): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:46.783095 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 14 00:02:46.783102 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 14 00:02:46.783109 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 14 00:02:46.783116 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 14 00:02:46.783123 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 14 00:02:46.783130 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 14 00:02:46.783138 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 14 00:02:46.783145 systemd[1]: Created slice user.slice - User and Session Slice. Jan 14 00:02:46.783151 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 00:02:46.783879 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 00:02:46.783908 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 14 00:02:46.783916 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 14 00:02:46.783924 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 14 00:02:46.783933 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 00:02:46.783940 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 14 00:02:46.783947 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 00:02:46.783958 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 00:02:46.783965 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 14 00:02:46.783972 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 14 00:02:46.783978 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. 
Jan 14 00:02:46.783985 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 14 00:02:46.783992 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 00:02:46.784001 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 00:02:46.784007 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 14 00:02:46.784014 systemd[1]: Reached target slices.target - Slice Units. Jan 14 00:02:46.784021 systemd[1]: Reached target swap.target - Swaps. Jan 14 00:02:46.784027 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 14 00:02:46.784035 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 14 00:02:46.784043 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 14 00:02:46.784050 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 00:02:46.784057 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 14 00:02:46.784063 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 00:02:46.784071 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 14 00:02:46.784078 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 14 00:02:46.784085 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 00:02:46.784093 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 00:02:46.784099 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 14 00:02:46.784106 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 14 00:02:46.784113 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 14 00:02:46.784121 systemd[1]: Mounting media.mount - External Media Directory... Jan 14 00:02:46.784128 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 14 00:02:46.784134 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 14 00:02:46.784141 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 14 00:02:46.784148 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 14 00:02:46.784155 systemd[1]: Reached target machines.target - Containers. Jan 14 00:02:46.784177 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 14 00:02:46.784185 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 00:02:46.784192 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 00:02:46.784199 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 14 00:02:46.784206 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 00:02:46.784213 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 00:02:46.784220 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 00:02:46.784227 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 14 00:02:46.784234 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 14 00:02:46.784241 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 14 00:02:46.784249 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 14 00:02:46.784256 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 14 00:02:46.784262 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 14 00:02:46.784269 systemd[1]: Stopped systemd-fsck-usr.service. Jan 14 00:02:46.784277 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 00:02:46.784284 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 00:02:46.784293 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 00:02:46.784299 kernel: ACPI: bus type drm_connector registered Jan 14 00:02:46.784306 kernel: fuse: init (API version 7.41) Jan 14 00:02:46.784313 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 00:02:46.784320 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 14 00:02:46.784328 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 14 00:02:46.784359 systemd-journald[1611]: Collecting audit messages is enabled. Jan 14 00:02:46.784375 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 00:02:46.784384 systemd-journald[1611]: Journal started Jan 14 00:02:46.784399 systemd-journald[1611]: Runtime Journal (/run/log/journal/530bc6b1131a464584a159f00aea177f) is 8M, max 78.3M, 70.3M free. Jan 14 00:02:46.198000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 14 00:02:46.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:46.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:46.694000 audit: BPF prog-id=14 op=UNLOAD Jan 14 00:02:46.694000 audit: BPF prog-id=13 op=UNLOAD Jan 14 00:02:46.695000 audit: BPF prog-id=15 op=LOAD Jan 14 00:02:46.695000 audit: BPF prog-id=16 op=LOAD Jan 14 00:02:46.695000 audit: BPF prog-id=17 op=LOAD Jan 14 00:02:46.779000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 14 00:02:46.779000 audit[1611]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffe3a57890 a2=4000 a3=0 items=0 ppid=1 pid=1611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:02:46.779000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 14 00:02:45.782058 systemd[1]: Queued start job for default target multi-user.target. 
Jan 14 00:02:45.798284 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 14 00:02:45.798767 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 14 00:02:45.799099 systemd[1]: systemd-journald.service: Consumed 2.475s CPU time. Jan 14 00:02:46.803180 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 00:02:46.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:46.809560 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 14 00:02:46.814403 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 14 00:02:46.820238 systemd[1]: Mounted media.mount - External Media Directory. Jan 14 00:02:46.825209 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 14 00:02:46.830371 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 14 00:02:46.835508 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 14 00:02:46.839914 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 00:02:46.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:46.846364 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 14 00:02:46.846630 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 14 00:02:46.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:46.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:46.853833 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 00:02:46.854056 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 00:02:46.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:46.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:46.859514 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 00:02:46.859738 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 00:02:46.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:46.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:46.864744 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 14 00:02:46.864934 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 00:02:46.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:46.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:46.873238 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 14 00:02:46.873442 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 14 00:02:46.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:46.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:46.878790 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 00:02:46.878994 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 00:02:46.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:46.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:46.884328 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 00:02:46.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:46.889635 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 00:02:46.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:46.896311 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 14 00:02:46.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:46.902829 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 14 00:02:46.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:46.909468 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 14 00:02:46.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:46.923025 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 00:02:46.928857 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 14 00:02:46.935436 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 14 00:02:46.951295 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 14 00:02:46.956377 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 14 00:02:46.956414 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 00:02:46.961612 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 14 00:02:46.967149 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 00:02:46.967263 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 00:02:46.982125 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 14 00:02:46.993895 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 14 00:02:46.999220 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 00:02:47.000114 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 14 00:02:47.004887 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 00:02:47.005716 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 00:02:47.011289 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 14 00:02:47.020301 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 00:02:47.030759 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 14 00:02:47.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:47.036834 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 14 00:02:47.044409 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 14 00:02:47.049994 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 14 00:02:47.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:47.057469 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 14 00:02:47.059749 systemd-journald[1611]: Time spent on flushing to /var/log/journal/530bc6b1131a464584a159f00aea177f is 11.908ms for 1084 entries. 
Jan 14 00:02:47.059749 systemd-journald[1611]: System Journal (/var/log/journal/530bc6b1131a464584a159f00aea177f) is 8M, max 2.2G, 2.2G free. Jan 14 00:02:47.114917 systemd-journald[1611]: Received client request to flush runtime journal. Jan 14 00:02:47.070352 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 14 00:02:47.116369 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 14 00:02:47.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:47.148189 kernel: loop1: detected capacity change from 0 to 100192 Jan 14 00:02:47.155433 systemd-tmpfiles[1668]: ACLs are not supported, ignoring. Jan 14 00:02:47.155446 systemd-tmpfiles[1668]: ACLs are not supported, ignoring. Jan 14 00:02:47.158531 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 00:02:47.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:47.166213 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 14 00:02:47.176203 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 14 00:02:47.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:47.182402 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 00:02:47.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:47.273294 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 14 00:02:47.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:47.279000 audit: BPF prog-id=18 op=LOAD Jan 14 00:02:47.279000 audit: BPF prog-id=19 op=LOAD Jan 14 00:02:47.279000 audit: BPF prog-id=20 op=LOAD Jan 14 00:02:47.282933 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 14 00:02:47.289000 audit: BPF prog-id=21 op=LOAD Jan 14 00:02:47.293339 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 00:02:47.300372 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 00:02:47.316000 audit: BPF prog-id=22 op=LOAD Jan 14 00:02:47.316000 audit: BPF prog-id=23 op=LOAD Jan 14 00:02:47.316000 audit: BPF prog-id=24 op=LOAD Jan 14 00:02:47.318054 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 14 00:02:47.322000 audit: BPF prog-id=25 op=LOAD Jan 14 00:02:47.322000 audit: BPF prog-id=26 op=LOAD Jan 14 00:02:47.322000 audit: BPF prog-id=27 op=LOAD Jan 14 00:02:47.326329 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Jan 14 00:02:47.330148 systemd-tmpfiles[1688]: ACLs are not supported, ignoring. Jan 14 00:02:47.330773 systemd-tmpfiles[1688]: ACLs are not supported, ignoring. Jan 14 00:02:47.344304 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 00:02:47.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:47.365388 systemd-nsresourced[1690]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 14 00:02:47.367839 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 14 00:02:47.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:47.393018 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 14 00:02:47.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:47.465041 systemd-oomd[1686]: No swap; memory pressure usage will be degraded Jan 14 00:02:47.465519 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 14 00:02:47.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:47.484817 systemd-resolved[1687]: Positive Trust Anchors: Jan 14 00:02:47.484836 systemd-resolved[1687]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 00:02:47.484839 systemd-resolved[1687]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 00:02:47.484858 systemd-resolved[1687]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 00:02:47.530248 systemd-resolved[1687]: Using system hostname 'ci-4547.0.0-n-d5ef04779b'. Jan 14 00:02:47.531501 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 00:02:47.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:47.536754 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 00:02:47.563204 kernel: loop2: detected capacity change from 0 to 45344 Jan 14 00:02:47.645610 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Jan 14 00:02:47.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:47.649000 audit: BPF prog-id=8 op=UNLOAD Jan 14 00:02:47.650000 audit: BPF prog-id=7 op=UNLOAD Jan 14 00:02:47.650000 audit: BPF prog-id=28 op=LOAD Jan 14 00:02:47.650000 audit: BPF prog-id=29 op=LOAD Jan 14 00:02:47.652705 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 00:02:47.678658 systemd-udevd[1709]: Using default interface naming scheme 'v257'. Jan 14 00:02:47.874199 kernel: loop3: detected capacity change from 0 to 27544 Jan 14 00:02:47.896304 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 14 00:02:47.903316 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 00:02:47.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:47.910000 audit: BPF prog-id=30 op=LOAD Jan 14 00:02:47.913001 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 00:02:47.994220 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 14 00:02:48.040472 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 14 00:02:48.040810 kernel: mousedev: PS/2 mouse device common for all mice Jan 14 00:02:48.040831 kernel: hv_vmbus: registering driver hv_balloon Jan 14 00:02:48.054743 kernel: hv_vmbus: registering driver hyperv_fb Jan 14 00:02:48.054855 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 14 00:02:48.054880 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 14 00:02:48.061309 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 14 00:02:48.061402 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 14 00:02:48.072764 kernel: Console: switching to colour dummy device 80x25 Jan 14 00:02:48.080185 kernel: Console: switching to colour frame buffer device 128x48 Jan 14 00:02:48.083763 systemd-networkd[1723]: lo: Link UP Jan 14 00:02:48.083770 systemd-networkd[1723]: lo: Gained carrier Jan 14 00:02:48.085227 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 00:02:48.085229 systemd-networkd[1723]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 00:02:48.085234 systemd-networkd[1723]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 00:02:48.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:48.094072 systemd[1]: Reached target network.target - Network. Jan 14 00:02:48.102547 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 14 00:02:48.114401 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jan 14 00:02:48.163186 kernel: mlx5_core c16c:00:02.0 enP49516s1: Link up Jan 14 00:02:48.172257 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 00:02:48.185224 kernel: hv_netvsc 002248b9-d64a-0022-48b9-d64a002248b9 eth0: Data path switched to VF: enP49516s1 Jan 14 00:02:48.187352 systemd-networkd[1723]: enP49516s1: Link UP Jan 14 00:02:48.187490 systemd-networkd[1723]: eth0: Link UP Jan 14 00:02:48.187493 systemd-networkd[1723]: eth0: Gained carrier Jan 14 00:02:48.187528 systemd-networkd[1723]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 00:02:48.196430 systemd-networkd[1723]: enP49516s1: Gained carrier Jan 14 00:02:48.196571 systemd-networkd[1723]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 00:02:48.199992 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 00:02:48.202410 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 00:02:48.204224 systemd-networkd[1723]: eth0: DHCPv4 address 10.200.20.29/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 14 00:02:48.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:48.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:48.210218 kernel: MACsec IEEE 802.1AE Jan 14 00:02:48.215419 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 00:02:48.224260 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 14 00:02:48.227181 kernel: loop4: detected capacity change from 0 to 207008 Jan 14 00:02:48.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:48.238365 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 00:02:48.238582 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 00:02:48.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:48.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:48.247484 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 14 00:02:48.281187 kernel: loop5: detected capacity change from 0 to 100192 Jan 14 00:02:48.297287 kernel: loop6: detected capacity change from 0 to 45344 Jan 14 00:02:48.316386 kernel: loop7: detected capacity change from 0 to 27544 Jan 14 00:02:48.343180 kernel: loop1: detected capacity change from 0 to 207008 Jan 14 00:02:48.361534 (sd-merge)[1792]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Jan 14 00:02:48.364895 (sd-merge)[1792]: Merged extensions into '/usr'. Jan 14 00:02:48.379313 systemd[1]: Reload requested from client PID 1667 ('systemd-sysext') (unit systemd-sysext.service)... Jan 14 00:02:48.379331 systemd[1]: Reloading... Jan 14 00:02:48.441396 zram_generator::config[1881]: No configuration found. Jan 14 00:02:48.618576 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 14 00:02:48.623812 systemd[1]: Reloading finished in 244 ms. Jan 14 00:02:48.645529 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 14 00:02:48.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:48.651501 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 00:02:48.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:48.675371 systemd[1]: Starting ensure-sysext.service... Jan 14 00:02:48.680311 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 14 00:02:48.691315 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 14 00:02:48.695000 audit: BPF prog-id=31 op=LOAD Jan 14 00:02:48.702000 audit: BPF prog-id=18 op=UNLOAD Jan 14 00:02:48.702000 audit: BPF prog-id=32 op=LOAD Jan 14 00:02:48.702000 audit: BPF prog-id=33 op=LOAD Jan 14 00:02:48.702000 audit: BPF prog-id=19 op=UNLOAD Jan 14 00:02:48.702000 audit: BPF prog-id=20 op=UNLOAD Jan 14 00:02:48.702000 audit: BPF prog-id=34 op=LOAD Jan 14 00:02:48.702000 audit: BPF prog-id=15 op=UNLOAD Jan 14 00:02:48.702000 audit: BPF prog-id=35 op=LOAD Jan 14 00:02:48.702000 audit: BPF prog-id=36 op=LOAD Jan 14 00:02:48.702000 audit: BPF prog-id=16 op=UNLOAD Jan 14 00:02:48.702000 audit: BPF prog-id=17 op=UNLOAD Jan 14 00:02:48.703000 audit: BPF prog-id=37 op=LOAD Jan 14 00:02:48.703000 audit: BPF prog-id=25 op=UNLOAD Jan 14 00:02:48.703000 audit: BPF prog-id=38 op=LOAD Jan 14 00:02:48.703000 audit: BPF prog-id=39 op=LOAD Jan 14 00:02:48.703000 audit: BPF prog-id=26 op=UNLOAD Jan 14 00:02:48.703000 audit: BPF prog-id=27 op=UNLOAD Jan 14 00:02:48.703000 audit: BPF prog-id=40 op=LOAD Jan 14 00:02:48.703000 audit: BPF prog-id=41 op=LOAD Jan 14 00:02:48.703000 audit: BPF prog-id=28 op=UNLOAD Jan 14 00:02:48.703000 audit: BPF prog-id=29 op=UNLOAD Jan 14 00:02:48.704000 audit: BPF prog-id=42 op=LOAD Jan 14 00:02:48.704000 audit: BPF prog-id=22 op=UNLOAD Jan 14 00:02:48.704000 audit: BPF prog-id=43 op=LOAD Jan 14 00:02:48.704000 audit: BPF prog-id=44 op=LOAD Jan 14 00:02:48.704000 audit: BPF prog-id=23 op=UNLOAD Jan 14 00:02:48.704000 audit: BPF prog-id=24 op=UNLOAD Jan 14 00:02:48.705000 audit: BPF prog-id=45 op=LOAD Jan 14 00:02:48.705000 audit: BPF prog-id=30 op=UNLOAD Jan 14 00:02:48.705000 audit: BPF prog-id=46 op=LOAD Jan 14 00:02:48.705000 audit: BPF prog-id=21 op=UNLOAD Jan 14 00:02:48.710323 systemd[1]: Reload requested from client PID 1934 ('systemctl') (unit ensure-sysext.service)... Jan 14 00:02:48.710336 systemd[1]: Reloading... Jan 14 00:02:48.714220 systemd-tmpfiles[1936]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 14 00:02:48.714514 systemd-tmpfiles[1936]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 14 00:02:48.714695 systemd-tmpfiles[1936]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 14 00:02:48.715820 systemd-tmpfiles[1936]: ACLs are not supported, ignoring. Jan 14 00:02:48.715996 systemd-tmpfiles[1936]: ACLs are not supported, ignoring. Jan 14 00:02:48.720765 systemd-tmpfiles[1936]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 00:02:48.720873 systemd-tmpfiles[1936]: Skipping /boot Jan 14 00:02:48.728670 systemd-tmpfiles[1936]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 00:02:48.728789 systemd-tmpfiles[1936]: Skipping /boot Jan 14 00:02:48.781455 zram_generator::config[1975]: No configuration found. Jan 14 00:02:48.930534 systemd[1]: Reloading finished in 219 ms. 
Jan 14 00:02:48.944000 audit: BPF prog-id=47 op=LOAD Jan 14 00:02:48.944000 audit: BPF prog-id=34 op=UNLOAD Jan 14 00:02:48.944000 audit: BPF prog-id=48 op=LOAD Jan 14 00:02:48.944000 audit: BPF prog-id=49 op=LOAD Jan 14 00:02:48.944000 audit: BPF prog-id=35 op=UNLOAD Jan 14 00:02:48.944000 audit: BPF prog-id=36 op=UNLOAD Jan 14 00:02:48.944000 audit: BPF prog-id=50 op=LOAD Jan 14 00:02:48.944000 audit: BPF prog-id=37 op=UNLOAD Jan 14 00:02:48.944000 audit: BPF prog-id=51 op=LOAD Jan 14 00:02:48.944000 audit: BPF prog-id=52 op=LOAD Jan 14 00:02:48.944000 audit: BPF prog-id=38 op=UNLOAD Jan 14 00:02:48.944000 audit: BPF prog-id=39 op=UNLOAD Jan 14 00:02:48.944000 audit: BPF prog-id=53 op=LOAD Jan 14 00:02:48.944000 audit: BPF prog-id=42 op=UNLOAD Jan 14 00:02:48.944000 audit: BPF prog-id=54 op=LOAD Jan 14 00:02:48.945000 audit: BPF prog-id=55 op=LOAD Jan 14 00:02:48.945000 audit: BPF prog-id=43 op=UNLOAD Jan 14 00:02:48.945000 audit: BPF prog-id=44 op=UNLOAD Jan 14 00:02:48.945000 audit: BPF prog-id=56 op=LOAD Jan 14 00:02:48.945000 audit: BPF prog-id=46 op=UNLOAD Jan 14 00:02:48.945000 audit: BPF prog-id=57 op=LOAD Jan 14 00:02:48.945000 audit: BPF prog-id=45 op=UNLOAD Jan 14 00:02:48.946000 audit: BPF prog-id=58 op=LOAD Jan 14 00:02:48.946000 audit: BPF prog-id=59 op=LOAD Jan 14 00:02:48.946000 audit: BPF prog-id=40 op=UNLOAD Jan 14 00:02:48.946000 audit: BPF prog-id=41 op=UNLOAD Jan 14 00:02:48.947000 audit: BPF prog-id=60 op=LOAD Jan 14 00:02:48.951000 audit: BPF prog-id=31 op=UNLOAD Jan 14 00:02:48.951000 audit: BPF prog-id=61 op=LOAD Jan 14 00:02:48.951000 audit: BPF prog-id=62 op=LOAD Jan 14 00:02:48.951000 audit: BPF prog-id=32 op=UNLOAD Jan 14 00:02:48.951000 audit: BPF prog-id=33 op=UNLOAD Jan 14 00:02:48.954624 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 14 00:02:48.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:48.961121 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 00:02:48.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:48.975480 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 00:02:49.004260 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 14 00:02:49.017795 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 14 00:02:49.025395 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 14 00:02:49.035566 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 14 00:02:49.044000 audit[2040]: SYSTEM_BOOT pid=2040 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 14 00:02:49.052864 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jan 14 00:02:49.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:49.059767 systemd[1]: Finished ensure-sysext.service. Jan 14 00:02:49.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:49.068289 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 00:02:49.070315 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 00:02:49.083606 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 00:02:49.093008 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 00:02:49.102297 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 00:02:49.106608 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 00:02:49.106705 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 00:02:49.106737 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 00:02:49.106778 systemd[1]: Reached target time-set.target - System Time Set. Jan 14 00:02:49.111541 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 00:02:49.111751 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 00:02:49.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:49.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:49.118389 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 00:02:49.120256 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 00:02:49.125819 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 00:02:49.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:49.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:49.126014 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 00:02:49.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:02:49.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:49.134064 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 00:02:49.134846 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 00:02:49.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:49.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:02:49.141000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 14 00:02:49.141000 audit[2060]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd5bec1c0 a2=420 a3=0 items=0 ppid=2030 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:02:49.141000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 00:02:49.142757 augenrules[2060]: No rules Jan 14 00:02:49.143991 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 00:02:49.144061 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 00:02:49.144537 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 00:02:49.144793 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 00:02:49.152696 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 14 00:02:49.721922 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 14 00:02:49.727503 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 14 00:02:50.183287 systemd-networkd[1723]: eth0: Gained IPv6LL Jan 14 00:02:50.187615 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 14 00:02:50.193807 systemd[1]: Reached target network-online.target - Network is Online. Jan 14 00:02:54.227398 ldconfig[2033]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 14 00:02:54.237631 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 14 00:02:54.244934 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 14 00:02:54.275255 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 14 00:02:54.280274 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 00:02:54.284905 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jan 14 00:02:54.289899 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 14 00:02:54.295234 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 14 00:02:54.300055 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 14 00:02:54.305310 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 14 00:02:54.310532 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 14 00:02:54.315239 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 14 00:02:54.320358 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 14 00:02:54.320383 systemd[1]: Reached target paths.target - Path Units. Jan 14 00:02:54.324074 systemd[1]: Reached target timers.target - Timer Units. Jan 14 00:02:54.358218 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 14 00:02:54.364561 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 14 00:02:54.369994 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 14 00:02:54.375400 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 14 00:02:54.380781 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 14 00:02:54.393934 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 14 00:02:54.398609 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 14 00:02:54.404084 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 14 00:02:54.409081 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 00:02:54.413031 systemd[1]: Reached target basic.target - Basic System. Jan 14 00:02:54.416897 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 14 00:02:54.416920 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 14 00:02:54.419228 systemd[1]: Starting chronyd.service - NTP client/server... Jan 14 00:02:54.431277 systemd[1]: Starting containerd.service - containerd container runtime... Jan 14 00:02:54.439290 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 14 00:02:54.448391 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 14 00:02:54.456901 chronyd[2078]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 14 00:02:54.457315 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 14 00:02:54.464359 chronyd[2078]: Timezone right/UTC failed leap second check, ignoring Jan 14 00:02:54.464508 chronyd[2078]: Loaded seccomp filter (level 2) Jan 14 00:02:54.465594 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 14 00:02:54.483181 jq[2086]: false Jan 14 00:02:54.481878 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 14 00:02:54.487449 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Jan 14 00:02:54.488555 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 14 00:02:54.496495 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 14 00:02:54.510512 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:02:54.516704 KVP[2088]: KVP starting; pid is:2088 Jan 14 00:02:54.519544 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 14 00:02:54.526077 KVP[2088]: KVP LIC Version: 3.1 Jan 14 00:02:54.526186 kernel: hv_utils: KVP IC version 4.0 Jan 14 00:02:54.528565 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 14 00:02:54.540110 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 14 00:02:54.548091 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 14 00:02:54.563481 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 14 00:02:54.572309 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 14 00:02:54.578896 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 14 00:02:54.579349 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 14 00:02:54.579899 systemd[1]: Starting update-engine.service - Update Engine... Jan 14 00:02:54.587000 extend-filesystems[2087]: Found /dev/sda6 Jan 14 00:02:54.592355 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 14 00:02:54.598405 systemd[1]: Started chronyd.service - NTP client/server. Jan 14 00:02:54.606560 extend-filesystems[2087]: Found /dev/sda9 Jan 14 00:02:54.611948 jq[2115]: true Jan 14 00:02:54.611211 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 14 00:02:54.617447 extend-filesystems[2087]: Checking size of /dev/sda9 Jan 14 00:02:54.621854 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 14 00:02:54.622059 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 14 00:02:54.625453 systemd[1]: motdgen.service: Deactivated successfully. Jan 14 00:02:54.626222 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 14 00:02:54.636994 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 14 00:02:54.647982 update_engine[2111]: I20260114 00:02:54.647894 2111 main.cc:92] Flatcar Update Engine starting Jan 14 00:02:54.648522 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 14 00:02:54.648739 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 14 00:02:54.654217 extend-filesystems[2087]: Resized partition /dev/sda9 Jan 14 00:02:54.680954 jq[2132]: true Jan 14 00:02:54.689690 extend-filesystems[2144]: resize2fs 1.47.3 (8-Jul-2025) Jan 14 00:02:54.711066 kernel: EXT4-fs (sda9): resizing filesystem from 6359552 to 6376955 blocks Jan 14 00:02:54.711174 kernel: EXT4-fs (sda9): resized filesystem to 6376955 Jan 14 00:02:54.730914 systemd-logind[2105]: New seat seat0. Jan 14 00:02:54.737019 systemd-logind[2105]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 14 00:02:54.738358 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 14 00:02:54.747073 extend-filesystems[2144]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 14 00:02:54.747073 extend-filesystems[2144]: old_desc_blocks = 4, new_desc_blocks = 4 Jan 14 00:02:54.747073 extend-filesystems[2144]: The filesystem on /dev/sda9 is now 6376955 (4k) blocks long. Jan 14 00:02:54.854414 extend-filesystems[2087]: Resized filesystem in /dev/sda9 Jan 14 00:02:54.890908 update_engine[2111]: I20260114 00:02:54.813295 2111 update_check_scheduler.cc:74] Next update check in 5m12s Jan 14 00:02:54.747842 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 14 00:02:54.891013 tar[2129]: linux-arm64/LICENSE Jan 14 00:02:54.891013 tar[2129]: linux-arm64/helm Jan 14 00:02:54.891282 bash[2166]: Updated "/home/core/.ssh/authorized_keys" Jan 14 00:02:54.801596 dbus-daemon[2081]: [system] SELinux support is enabled Jan 14 00:02:54.891472 coreos-metadata[2080]: Jan 14 00:02:54.888 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 14 00:02:54.749491 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 14 00:02:54.847409 dbus-daemon[2081]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 14 00:02:54.801964 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 14 00:02:54.839089 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 14 00:02:54.847299 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 14 00:02:54.847396 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 14 00:02:54.847419 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 14 00:02:54.853345 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 14 00:02:54.853363 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 14 00:02:54.873977 systemd[1]: Started update-engine.service - Update Engine. Jan 14 00:02:54.904033 coreos-metadata[2080]: Jan 14 00:02:54.900 INFO Fetch successful Jan 14 00:02:54.904033 coreos-metadata[2080]: Jan 14 00:02:54.900 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 14 00:02:54.908736 coreos-metadata[2080]: Jan 14 00:02:54.908 INFO Fetch successful Jan 14 00:02:54.908927 coreos-metadata[2080]: Jan 14 00:02:54.908 INFO Fetching http://168.63.129.16/machine/afa4b309-5282-492e-a4cf-4d317cd77867/cb845009%2D6a46%2D4751%2Dafe3%2D0cde4737c9bd.%5Fci%2D4547.0.0%2Dn%2Dd5ef04779b?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 14 00:02:54.911315 coreos-metadata[2080]: Jan 14 00:02:54.911 INFO Fetch successful Jan 14 00:02:54.911315 coreos-metadata[2080]: Jan 14 00:02:54.911 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 14 00:02:54.911775 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 14 00:02:54.921123 coreos-metadata[2080]: Jan 14 00:02:54.921 INFO Fetch successful Jan 14 00:02:55.003789 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
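The resize2fs/extend-filesystems entries above report the root filesystem growing from 6359552 to 6376955 blocks of 4 KiB. As a quick arithmetic check of what those block counts mean in bytes (a minimal Python sketch; the numbers are copied from the log lines above, nothing else is assumed):

# Arithmetic check only; block counts taken from the resize2fs output above.
OLD_BLOCKS = 6_359_552
NEW_BLOCKS = 6_376_955
BLOCK_SIZE = 4096  # bytes, the "(4k)" noted in the log

def to_gib(blocks: int, block_size: int = BLOCK_SIZE) -> float:
    """Convert a block count to GiB."""
    return blocks * block_size / 2**30

if __name__ == "__main__":
    print(f"before resize: {to_gib(OLD_BLOCKS):.2f} GiB")
    print(f"after resize:  {to_gib(NEW_BLOCKS):.2f} GiB")
    print(f"gained:        {(NEW_BLOCKS - OLD_BLOCKS) * BLOCK_SIZE / 2**20:.1f} MiB")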
Jan 14 00:02:55.008916 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 14 00:02:55.188493 locksmithd[2219]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 14 00:02:55.283354 containerd[2133]: time="2026-01-14T00:02:55Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 14 00:02:55.285885 containerd[2133]: time="2026-01-14T00:02:55.285834608Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 14 00:02:55.300250 containerd[2133]: time="2026-01-14T00:02:55.299143320Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.456µs" Jan 14 00:02:55.300250 containerd[2133]: time="2026-01-14T00:02:55.300243816Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 14 00:02:55.300369 containerd[2133]: time="2026-01-14T00:02:55.300295432Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 14 00:02:55.300369 containerd[2133]: time="2026-01-14T00:02:55.300306144Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 14 00:02:55.300740 containerd[2133]: time="2026-01-14T00:02:55.300716552Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 14 00:02:55.300774 containerd[2133]: time="2026-01-14T00:02:55.300743352Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 00:02:55.300845 containerd[2133]: time="2026-01-14T00:02:55.300808256Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 00:02:55.300845 containerd[2133]: time="2026-01-14T00:02:55.300822168Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 00:02:55.301036 containerd[2133]: time="2026-01-14T00:02:55.301017144Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 00:02:55.301065 containerd[2133]: time="2026-01-14T00:02:55.301037784Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 00:02:55.301065 containerd[2133]: time="2026-01-14T00:02:55.301046472Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 00:02:55.301065 containerd[2133]: time="2026-01-14T00:02:55.301057040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 00:02:55.302216 containerd[2133]: time="2026-01-14T00:02:55.302191672Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 00:02:55.302216 containerd[2133]: time="2026-01-14T00:02:55.302213504Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 14 00:02:55.302352 containerd[2133]: time="2026-01-14T00:02:55.302317904Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 14 00:02:55.302483 containerd[2133]: time="2026-01-14T00:02:55.302467664Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 00:02:55.302514 containerd[2133]: time="2026-01-14T00:02:55.302490640Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 00:02:55.302514 containerd[2133]: time="2026-01-14T00:02:55.302498528Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 14 00:02:55.302542 containerd[2133]: time="2026-01-14T00:02:55.302519968Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 14 00:02:55.302741 containerd[2133]: time="2026-01-14T00:02:55.302664472Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 14 00:02:55.302741 containerd[2133]: time="2026-01-14T00:02:55.302714352Z" level=info msg="metadata content store policy set" policy=shared Jan 14 00:02:55.317168 containerd[2133]: time="2026-01-14T00:02:55.316579552Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 14 00:02:55.317168 containerd[2133]: time="2026-01-14T00:02:55.316649072Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 00:02:55.317341 containerd[2133]: time="2026-01-14T00:02:55.317316128Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 00:02:55.317341 containerd[2133]: time="2026-01-14T00:02:55.317337472Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 14 00:02:55.317375 containerd[2133]: time="2026-01-14T00:02:55.317349032Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 14 00:02:55.317375 containerd[2133]: time="2026-01-14T00:02:55.317368160Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 14 00:02:55.317402 containerd[2133]: time="2026-01-14T00:02:55.317376616Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 14 00:02:55.317402 containerd[2133]: time="2026-01-14T00:02:55.317383872Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 14 00:02:55.317402 containerd[2133]: time="2026-01-14T00:02:55.317394784Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 14 00:02:55.317440 containerd[2133]: time="2026-01-14T00:02:55.317405896Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 14 00:02:55.317440 containerd[2133]: time="2026-01-14T00:02:55.317413704Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 14 00:02:55.317440 containerd[2133]: 
time="2026-01-14T00:02:55.317420392Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 14 00:02:55.317440 containerd[2133]: time="2026-01-14T00:02:55.317427800Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 14 00:02:55.317440 containerd[2133]: time="2026-01-14T00:02:55.317438264Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 14 00:02:55.317584 containerd[2133]: time="2026-01-14T00:02:55.317569016Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 14 00:02:55.317603 containerd[2133]: time="2026-01-14T00:02:55.317588264Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 14 00:02:55.317603 containerd[2133]: time="2026-01-14T00:02:55.317598232Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 14 00:02:55.317637 containerd[2133]: time="2026-01-14T00:02:55.317606128Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 14 00:02:55.317637 containerd[2133]: time="2026-01-14T00:02:55.317613280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 14 00:02:55.317637 containerd[2133]: time="2026-01-14T00:02:55.317619872Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 14 00:02:55.317637 containerd[2133]: time="2026-01-14T00:02:55.317627112Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 14 00:02:55.317682 containerd[2133]: time="2026-01-14T00:02:55.317638328Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 14 00:02:55.317682 containerd[2133]: time="2026-01-14T00:02:55.317645448Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 14 00:02:55.317682 containerd[2133]: time="2026-01-14T00:02:55.317652440Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 14 00:02:55.317682 containerd[2133]: time="2026-01-14T00:02:55.317658344Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 14 00:02:55.317682 containerd[2133]: time="2026-01-14T00:02:55.317677984Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 14 00:02:55.317741 containerd[2133]: time="2026-01-14T00:02:55.317711064Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 14 00:02:55.317741 containerd[2133]: time="2026-01-14T00:02:55.317720768Z" level=info msg="Start snapshots syncer" Jan 14 00:02:55.318275 containerd[2133]: time="2026-01-14T00:02:55.318252064Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 14 00:02:55.319687 containerd[2133]: time="2026-01-14T00:02:55.319648416Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 14 00:02:55.319768 containerd[2133]: time="2026-01-14T00:02:55.319705984Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 14 00:02:55.322101 containerd[2133]: time="2026-01-14T00:02:55.319807392Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 14 00:02:55.322101 containerd[2133]: time="2026-01-14T00:02:55.320270200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 14 00:02:55.322101 containerd[2133]: time="2026-01-14T00:02:55.320294816Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 14 00:02:55.322101 containerd[2133]: time="2026-01-14T00:02:55.320303496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 14 00:02:55.322101 containerd[2133]: time="2026-01-14T00:02:55.320310416Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 14 00:02:55.322101 containerd[2133]: time="2026-01-14T00:02:55.320318568Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 14 00:02:55.322101 containerd[2133]: time="2026-01-14T00:02:55.320327120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 14 00:02:55.322101 containerd[2133]: time="2026-01-14T00:02:55.320334408Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 14 00:02:55.322101 containerd[2133]: time="2026-01-14T00:02:55.320340544Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 14 
00:02:55.322101 containerd[2133]: time="2026-01-14T00:02:55.320347712Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 14 00:02:55.322101 containerd[2133]: time="2026-01-14T00:02:55.321097144Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 00:02:55.322101 containerd[2133]: time="2026-01-14T00:02:55.321177392Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 00:02:55.322101 containerd[2133]: time="2026-01-14T00:02:55.321185976Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 00:02:55.322277 containerd[2133]: time="2026-01-14T00:02:55.321193944Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 00:02:55.322277 containerd[2133]: time="2026-01-14T00:02:55.321199056Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 14 00:02:55.322277 containerd[2133]: time="2026-01-14T00:02:55.321207280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 14 00:02:55.322277 containerd[2133]: time="2026-01-14T00:02:55.321215328Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 14 00:02:55.322277 containerd[2133]: time="2026-01-14T00:02:55.321228952Z" level=info msg="runtime interface created" Jan 14 00:02:55.322277 containerd[2133]: time="2026-01-14T00:02:55.321232248Z" level=info msg="created NRI interface" Jan 14 00:02:55.322277 containerd[2133]: time="2026-01-14T00:02:55.321238328Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 14 00:02:55.322277 containerd[2133]: time="2026-01-14T00:02:55.321247320Z" level=info msg="Connect containerd service" Jan 14 00:02:55.322277 containerd[2133]: time="2026-01-14T00:02:55.321271264Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 14 00:02:55.324556 containerd[2133]: time="2026-01-14T00:02:55.324530912Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 00:02:55.386672 tar[2129]: linux-arm64/README.md Jan 14 00:02:55.407031 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 14 00:02:55.418941 sshd_keygen[2109]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 14 00:02:55.437612 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 14 00:02:55.444664 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 14 00:02:55.452400 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 14 00:02:55.472288 systemd[1]: issuegen.service: Deactivated successfully. Jan 14 00:02:55.472556 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 14 00:02:55.483428 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 14 00:02:55.494926 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. 
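The containerd error above ("no network config found in /etc/cni/net.d: cni plugin not initialized") is expected at this point in boot: the directory stays empty until a CNI plugin, typically installed later by cluster add-ons, writes a config file there. A minimal Python sketch of that check (illustrative only, not containerd's own code; only the directory path is taken from the log line):

# Illustrative check, not containerd's implementation: report whether
# /etc/cni/net.d (the directory named in the containerd message above)
# contains any CNI network configuration yet.
from pathlib import Path

CNI_CONF_DIR = Path("/etc/cni/net.d")

def cni_configs() -> list[Path]:
    """Return CNI config files found in the conf dir, if any."""
    if not CNI_CONF_DIR.is_dir():
        return []
    return sorted(p for p in CNI_CONF_DIR.iterdir()
                  if p.suffix in {".conf", ".conflist", ".json"})

if __name__ == "__main__":
    found = cni_configs()
    if found:
        print("CNI config present:", ", ".join(p.name for p in found))
    else:
        print(f"no network config in {CNI_CONF_DIR}; containerd's CRI plugin "
              "will keep reporting the pod network as not ready")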
Jan 14 00:02:55.505760 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 14 00:02:55.514727 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 14 00:02:55.523860 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 14 00:02:55.530186 systemd[1]: Reached target getty.target - Login Prompts. Jan 14 00:02:55.653756 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:02:55.659241 (kubelet)[2297]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:02:55.671066 containerd[2133]: time="2026-01-14T00:02:55.671014888Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 14 00:02:55.671066 containerd[2133]: time="2026-01-14T00:02:55.671079416Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 14 00:02:55.671812 containerd[2133]: time="2026-01-14T00:02:55.671099512Z" level=info msg="Start subscribing containerd event" Jan 14 00:02:55.671812 containerd[2133]: time="2026-01-14T00:02:55.671129784Z" level=info msg="Start recovering state" Jan 14 00:02:55.671812 containerd[2133]: time="2026-01-14T00:02:55.671220576Z" level=info msg="Start event monitor" Jan 14 00:02:55.671812 containerd[2133]: time="2026-01-14T00:02:55.671232992Z" level=info msg="Start cni network conf syncer for default" Jan 14 00:02:55.671812 containerd[2133]: time="2026-01-14T00:02:55.671239392Z" level=info msg="Start streaming server" Jan 14 00:02:55.671812 containerd[2133]: time="2026-01-14T00:02:55.671245376Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 14 00:02:55.671812 containerd[2133]: time="2026-01-14T00:02:55.671250320Z" level=info msg="runtime interface starting up..." Jan 14 00:02:55.671812 containerd[2133]: time="2026-01-14T00:02:55.671254064Z" level=info msg="starting plugins..." Jan 14 00:02:55.671812 containerd[2133]: time="2026-01-14T00:02:55.671264320Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 14 00:02:55.671812 containerd[2133]: time="2026-01-14T00:02:55.671360544Z" level=info msg="containerd successfully booted in 0.389096s" Jan 14 00:02:55.672369 systemd[1]: Started containerd.service - containerd container runtime. Jan 14 00:02:55.678354 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 14 00:02:55.687542 systemd[1]: Startup finished in 2.851s (kernel) + 11.277s (initrd) + 12.398s (userspace) = 26.527s. Jan 14 00:02:55.997442 kubelet[2297]: E0114 00:02:55.997334 2297 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:02:55.999628 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:02:55.999748 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:02:56.000371 systemd[1]: kubelet.service: Consumed 556ms CPU time, 254.6M memory peak. Jan 14 00:02:56.274701 login[2287]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:02:56.274702 login[2286]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:02:56.284031 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
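The kubelet exit above, and the scheduled restarts that repeat below, are the normal state of a node that has not yet joined a cluster: /var/lib/kubelet/config.yaml is generated by kubeadm init or kubeadm join rather than shipped in the image, so the unit keeps failing until that happens. As a sketch of what the missing file contains (hypothetical values; kubeadm writes the real one):

cat >/var/lib/kubelet/config.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd   # matches SystemdCgroup=true in the containerd config above
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock   # field accepted by recent kubelets
EOF
systemctl restart kubelet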
Jan 14 00:02:56.284980 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 14 00:02:56.288810 systemd-logind[2105]: New session 2 of user core. Jan 14 00:02:56.291951 systemd-logind[2105]: New session 1 of user core. Jan 14 00:02:56.302391 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 14 00:02:56.307438 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 14 00:02:56.316407 (systemd)[2312]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:02:56.318918 systemd-logind[2105]: New session 3 of user core. Jan 14 00:02:56.434675 systemd[2312]: Queued start job for default target default.target. Jan 14 00:02:56.441472 systemd[2312]: Created slice app.slice - User Application Slice. Jan 14 00:02:56.441652 systemd[2312]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 14 00:02:56.441719 systemd[2312]: Reached target paths.target - Paths. Jan 14 00:02:56.441825 systemd[2312]: Reached target timers.target - Timers. Jan 14 00:02:56.443368 systemd[2312]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 14 00:02:56.446345 systemd[2312]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 14 00:02:56.452286 systemd[2312]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 14 00:02:56.452616 systemd[2312]: Reached target sockets.target - Sockets. Jan 14 00:02:56.456210 systemd[2312]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 14 00:02:56.456284 systemd[2312]: Reached target basic.target - Basic System. Jan 14 00:02:56.456327 systemd[2312]: Reached target default.target - Main User Target. Jan 14 00:02:56.456348 systemd[2312]: Startup finished in 132ms. Jan 14 00:02:56.456826 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 14 00:02:56.470937 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 14 00:02:56.471676 systemd[1]: Started session-2.scope - Session 2 of User core. 
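The user@500.service instance started above is a per-user systemd manager; the app.slice, timers and sockets it reports are user units, separate from the system manager's. They can be inspected from one of the sessions just opened, for example:

loginctl user-status core                             # sessions, slice and user-manager state for UID 500
systemctl --user list-dependencies default.target     # run as the core user inside a session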
Jan 14 00:02:57.083078 waagent[2284]: 2026-01-14T00:02:57.082995Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jan 14 00:02:57.087639 waagent[2284]: 2026-01-14T00:02:57.087590Z INFO Daemon Daemon OS: flatcar 4547.0.0 Jan 14 00:02:57.091095 waagent[2284]: 2026-01-14T00:02:57.091062Z INFO Daemon Daemon Python: 3.11.13 Jan 14 00:02:57.094633 waagent[2284]: 2026-01-14T00:02:57.094572Z INFO Daemon Daemon Run daemon Jan 14 00:02:57.097686 waagent[2284]: 2026-01-14T00:02:57.097654Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4547.0.0' Jan 14 00:02:57.105151 waagent[2284]: 2026-01-14T00:02:57.104953Z INFO Daemon Daemon Using waagent for provisioning Jan 14 00:02:57.109045 waagent[2284]: 2026-01-14T00:02:57.109005Z INFO Daemon Daemon Activate resource disk Jan 14 00:02:57.112586 waagent[2284]: 2026-01-14T00:02:57.112554Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 14 00:02:57.120916 waagent[2284]: 2026-01-14T00:02:57.120873Z INFO Daemon Daemon Found device: None Jan 14 00:02:57.124285 waagent[2284]: 2026-01-14T00:02:57.124250Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 14 00:02:57.130537 waagent[2284]: 2026-01-14T00:02:57.130510Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 14 00:02:57.139493 waagent[2284]: 2026-01-14T00:02:57.139450Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 14 00:02:57.143852 waagent[2284]: 2026-01-14T00:02:57.143822Z INFO Daemon Daemon Running default provisioning handler Jan 14 00:02:57.153278 waagent[2284]: 2026-01-14T00:02:57.153236Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 14 00:02:57.163774 waagent[2284]: 2026-01-14T00:02:57.163727Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 14 00:02:57.170707 waagent[2284]: 2026-01-14T00:02:57.170672Z INFO Daemon Daemon cloud-init is enabled: False Jan 14 00:02:57.174307 waagent[2284]: 2026-01-14T00:02:57.174281Z INFO Daemon Daemon Copying ovf-env.xml Jan 14 00:02:57.270187 waagent[2284]: 2026-01-14T00:02:57.269311Z INFO Daemon Daemon Successfully mounted dvd Jan 14 00:02:57.296115 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 14 00:02:57.297335 waagent[2284]: 2026-01-14T00:02:57.297282Z INFO Daemon Daemon Detect protocol endpoint Jan 14 00:02:57.305613 waagent[2284]: 2026-01-14T00:02:57.301303Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 14 00:02:57.305940 waagent[2284]: 2026-01-14T00:02:57.305890Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 14 00:02:57.310979 waagent[2284]: 2026-01-14T00:02:57.310938Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 14 00:02:57.315046 waagent[2284]: 2026-01-14T00:02:57.315010Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 14 00:02:57.318801 waagent[2284]: 2026-01-14T00:02:57.318770Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 14 00:02:57.331573 waagent[2284]: 2026-01-14T00:02:57.331531Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 14 00:02:57.336870 waagent[2284]: 2026-01-14T00:02:57.336766Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 14 00:02:57.341073 waagent[2284]: 2026-01-14T00:02:57.341018Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 14 00:02:57.420275 waagent[2284]: 2026-01-14T00:02:57.420187Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 14 00:02:57.425106 waagent[2284]: 2026-01-14T00:02:57.425059Z INFO Daemon Daemon Forcing an update of the goal state. Jan 14 00:02:57.432830 waagent[2284]: 2026-01-14T00:02:57.432786Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 14 00:02:57.452037 waagent[2284]: 2026-01-14T00:02:57.452000Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 14 00:02:57.456575 waagent[2284]: 2026-01-14T00:02:57.456540Z INFO Daemon Jan 14 00:02:57.458714 waagent[2284]: 2026-01-14T00:02:57.458685Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 5f0687be-94c3-4d7e-831a-384110ef168d eTag: 4552839174404205730 source: Fabric] Jan 14 00:02:57.467320 waagent[2284]: 2026-01-14T00:02:57.467288Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 14 00:02:57.472242 waagent[2284]: 2026-01-14T00:02:57.472212Z INFO Daemon Jan 14 00:02:57.474334 waagent[2284]: 2026-01-14T00:02:57.474306Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 14 00:02:57.483722 waagent[2284]: 2026-01-14T00:02:57.483693Z INFO Daemon Daemon Downloading artifacts profile blob Jan 14 00:02:57.601964 waagent[2284]: 2026-01-14T00:02:57.601831Z INFO Daemon Downloaded certificate {'thumbprint': 'B6CE785623268A1E4331BDB25F002F9F2F206734', 'hasPrivateKey': True} Jan 14 00:02:57.609634 waagent[2284]: 2026-01-14T00:02:57.609586Z INFO Daemon Fetch goal state completed Jan 14 00:02:57.648176 waagent[2284]: 2026-01-14T00:02:57.648085Z INFO Daemon Daemon Starting provisioning Jan 14 00:02:57.652070 waagent[2284]: 2026-01-14T00:02:57.652024Z INFO Daemon Daemon Handle ovf-env.xml. Jan 14 00:02:57.655712 waagent[2284]: 2026-01-14T00:02:57.655683Z INFO Daemon Daemon Set hostname [ci-4547.0.0-n-d5ef04779b] Jan 14 00:02:57.661899 waagent[2284]: 2026-01-14T00:02:57.661850Z INFO Daemon Daemon Publish hostname [ci-4547.0.0-n-d5ef04779b] Jan 14 00:02:57.666830 waagent[2284]: 2026-01-14T00:02:57.666791Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 14 00:02:57.671728 waagent[2284]: 2026-01-14T00:02:57.671682Z INFO Daemon Daemon Primary interface is [eth0] Jan 14 00:02:57.682245 systemd-networkd[1723]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 00:02:57.682252 systemd-networkd[1723]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. 
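168.63.129.16, probed above, is Azure's WireServer/DNS host; waagent needs a working route to it (normally installed from the DHCP lease) before it can talk to the fabric. A quick way to confirm what the agent is checking, sketched with standard tools:

ip route get 168.63.129.16         # should go out eth0 via the DHCP gateway
networkctl status eth0             # shows the matched .network file and lease details
ip -4 -o address show dev eth0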
Jan 14 00:02:57.682337 systemd-networkd[1723]: eth0: DHCP lease lost Jan 14 00:02:57.692961 waagent[2284]: 2026-01-14T00:02:57.692894Z INFO Daemon Daemon Create user account if not exists Jan 14 00:02:57.697240 waagent[2284]: 2026-01-14T00:02:57.697195Z INFO Daemon Daemon User core already exists, skip useradd Jan 14 00:02:57.701992 waagent[2284]: 2026-01-14T00:02:57.701954Z INFO Daemon Daemon Configure sudoer Jan 14 00:02:57.707230 systemd-networkd[1723]: eth0: DHCPv4 address 10.200.20.29/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 14 00:02:57.711407 waagent[2284]: 2026-01-14T00:02:57.711345Z INFO Daemon Daemon Configure sshd Jan 14 00:02:57.718705 waagent[2284]: 2026-01-14T00:02:57.718649Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 14 00:02:57.728510 waagent[2284]: 2026-01-14T00:02:57.728437Z INFO Daemon Daemon Deploy ssh public key. Jan 14 00:02:58.823941 waagent[2284]: 2026-01-14T00:02:58.823893Z INFO Daemon Daemon Provisioning complete Jan 14 00:02:58.838101 waagent[2284]: 2026-01-14T00:02:58.838051Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 14 00:02:58.843253 waagent[2284]: 2026-01-14T00:02:58.843206Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 14 00:02:58.850758 waagent[2284]: 2026-01-14T00:02:58.850718Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jan 14 00:02:58.949200 waagent[2365]: 2026-01-14T00:02:58.948601Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jan 14 00:02:58.949200 waagent[2365]: 2026-01-14T00:02:58.948738Z INFO ExtHandler ExtHandler OS: flatcar 4547.0.0 Jan 14 00:02:58.949200 waagent[2365]: 2026-01-14T00:02:58.948778Z INFO ExtHandler ExtHandler Python: 3.11.13 Jan 14 00:02:58.949200 waagent[2365]: 2026-01-14T00:02:58.948815Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jan 14 00:02:58.987255 waagent[2365]: 2026-01-14T00:02:58.987188Z INFO ExtHandler ExtHandler Distro: flatcar-4547.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jan 14 00:02:58.987570 waagent[2365]: 2026-01-14T00:02:58.987541Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 00:02:58.987713 waagent[2365]: 2026-01-14T00:02:58.987690Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 00:02:58.993738 waagent[2365]: 2026-01-14T00:02:58.993693Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 14 00:02:58.999209 waagent[2365]: 2026-01-14T00:02:58.998672Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 14 00:02:58.999209 waagent[2365]: 2026-01-14T00:02:58.999058Z INFO ExtHandler Jan 14 00:02:58.999209 waagent[2365]: 2026-01-14T00:02:58.999109Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 5c5606e8-4ad3-46bd-a33f-9db815d3c877 eTag: 4552839174404205730 source: Fabric] Jan 14 00:02:58.999377 waagent[2365]: 2026-01-14T00:02:58.999347Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
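The goal state fetched above comes from the WireServer over the wire protocol version the daemon negotiated (2012-11-30). If the agent's view ever needs cross-checking, the same endpoints can be queried by hand; the paths below are the ones the agent is believed to use and may differ between protocol versions:

curl -s -H 'x-ms-version: 2012-11-30' 'http://168.63.129.16/?comp=versions'
curl -s -H 'x-ms-version: 2012-11-30' 'http://168.63.129.16/machine/?comp=goalstate'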
Jan 14 00:02:58.999783 waagent[2365]: 2026-01-14T00:02:58.999755Z INFO ExtHandler Jan 14 00:02:58.999823 waagent[2365]: 2026-01-14T00:02:58.999805Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 14 00:02:59.005914 waagent[2365]: 2026-01-14T00:02:59.005868Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 14 00:02:59.076921 waagent[2365]: 2026-01-14T00:02:59.076779Z INFO ExtHandler Downloaded certificate {'thumbprint': 'B6CE785623268A1E4331BDB25F002F9F2F206734', 'hasPrivateKey': True} Jan 14 00:02:59.077346 waagent[2365]: 2026-01-14T00:02:59.077311Z INFO ExtHandler Fetch goal state completed Jan 14 00:02:59.090306 waagent[2365]: 2026-01-14T00:02:59.090255Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.5.4 30 Sep 2025 (Library: OpenSSL 3.5.4 30 Sep 2025) Jan 14 00:02:59.094090 waagent[2365]: 2026-01-14T00:02:59.094036Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2365 Jan 14 00:02:59.094237 waagent[2365]: 2026-01-14T00:02:59.094207Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 14 00:02:59.094499 waagent[2365]: 2026-01-14T00:02:59.094470Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jan 14 00:02:59.095622 waagent[2365]: 2026-01-14T00:02:59.095587Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4547.0.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 14 00:02:59.095938 waagent[2365]: 2026-01-14T00:02:59.095908Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4547.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jan 14 00:02:59.096052 waagent[2365]: 2026-01-14T00:02:59.096030Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jan 14 00:02:59.096500 waagent[2365]: 2026-01-14T00:02:59.096469Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 14 00:02:59.159366 waagent[2365]: 2026-01-14T00:02:59.159324Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 14 00:02:59.159553 waagent[2365]: 2026-01-14T00:02:59.159524Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 14 00:02:59.164654 waagent[2365]: 2026-01-14T00:02:59.164621Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 14 00:02:59.169470 systemd[1]: Reload requested from client PID 2380 ('systemctl') (unit waagent.service)... Jan 14 00:02:59.169684 systemd[1]: Reloading... Jan 14 00:02:59.253224 zram_generator::config[2422]: No configuration found. Jan 14 00:02:59.412458 systemd[1]: Reloading finished in 242 ms. Jan 14 00:02:59.428973 waagent[2365]: 2026-01-14T00:02:59.428329Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 14 00:02:59.428973 waagent[2365]: 2026-01-14T00:02:59.428466Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 14 00:02:59.631287 waagent[2365]: 2026-01-14T00:02:59.631214Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 14 00:02:59.631554 waagent[2365]: 2026-01-14T00:02:59.631522Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jan 14 00:02:59.632168 waagent[2365]: 2026-01-14T00:02:59.632118Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 14 00:02:59.632469 waagent[2365]: 2026-01-14T00:02:59.632396Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 14 00:02:59.633211 waagent[2365]: 2026-01-14T00:02:59.632631Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 00:02:59.633211 waagent[2365]: 2026-01-14T00:02:59.632697Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 00:02:59.633211 waagent[2365]: 2026-01-14T00:02:59.632852Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 14 00:02:59.633211 waagent[2365]: 2026-01-14T00:02:59.632979Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 14 00:02:59.633211 waagent[2365]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 14 00:02:59.633211 waagent[2365]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 14 00:02:59.633211 waagent[2365]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 14 00:02:59.633211 waagent[2365]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 14 00:02:59.633211 waagent[2365]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 14 00:02:59.633211 waagent[2365]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 14 00:02:59.633487 waagent[2365]: 2026-01-14T00:02:59.633450Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 14 00:02:59.633537 waagent[2365]: 2026-01-14T00:02:59.633497Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 14 00:02:59.633887 waagent[2365]: 2026-01-14T00:02:59.633863Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 00:02:59.634041 waagent[2365]: 2026-01-14T00:02:59.634009Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 14 00:02:59.634087 waagent[2365]: 2026-01-14T00:02:59.634051Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 14 00:02:59.634364 waagent[2365]: 2026-01-14T00:02:59.634337Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 00:02:59.634673 waagent[2365]: 2026-01-14T00:02:59.634651Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 14 00:02:59.634837 waagent[2365]: 2026-01-14T00:02:59.634808Z INFO EnvHandler ExtHandler Configure routes Jan 14 00:02:59.635344 waagent[2365]: 2026-01-14T00:02:59.635327Z INFO EnvHandler ExtHandler Gateway:None Jan 14 00:02:59.635468 waagent[2365]: 2026-01-14T00:02:59.635447Z INFO EnvHandler ExtHandler Routes:None Jan 14 00:02:59.640243 waagent[2365]: 2026-01-14T00:02:59.640150Z INFO ExtHandler ExtHandler Jan 14 00:02:59.640306 waagent[2365]: 2026-01-14T00:02:59.640275Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 47e6feb7-5a53-4456-859e-c5c6669ddc67 correlation 152f72e6-b35b-458c-a84b-dffbe7c65364 created: 2026-01-14T00:02:07.354637Z] Jan 14 00:02:59.640586 waagent[2365]: 2026-01-14T00:02:59.640548Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
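The routing table above is the raw /proc/net/route dump, where addresses are little-endian hex: 0114C80A is 10.200.20.1 (the default gateway from the DHCP lease), 10813FA8 is 168.63.129.16 (the WireServer host route) and FEA9FEA9 is 169.254.169.254 (the instance metadata endpoint). A one-liner to decode an entry:

decode() { printf '%d.%d.%d.%d\n' 0x${1:6:2} 0x${1:4:2} 0x${1:2:2} 0x${1:0:2}; }
decode 0114C80A    # -> 10.200.20.1
decode 10813FA8    # -> 168.63.129.16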
Jan 14 00:02:59.640971 waagent[2365]: 2026-01-14T00:02:59.640944Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jan 14 00:02:59.665627 waagent[2365]: 2026-01-14T00:02:59.665526Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jan 14 00:02:59.665627 waagent[2365]: Try `iptables -h' or 'iptables --help' for more information.) Jan 14 00:02:59.665917 waagent[2365]: 2026-01-14T00:02:59.665880Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: FBBBF54E-FC0D-4EB2-85F8-BE7AF704144D;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jan 14 00:02:59.681665 waagent[2365]: 2026-01-14T00:02:59.681305Z INFO MonitorHandler ExtHandler Network interfaces: Jan 14 00:02:59.681665 waagent[2365]: Executing ['ip', '-a', '-o', 'link']: Jan 14 00:02:59.681665 waagent[2365]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 14 00:02:59.681665 waagent[2365]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b9:d6:4a brd ff:ff:ff:ff:ff:ff\ altname enx002248b9d64a Jan 14 00:02:59.681665 waagent[2365]: 3: enP49516s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b9:d6:4a brd ff:ff:ff:ff:ff:ff\ altname enP49516p0s2 Jan 14 00:02:59.681665 waagent[2365]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 14 00:02:59.681665 waagent[2365]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 14 00:02:59.681665 waagent[2365]: 2: eth0 inet 10.200.20.29/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 14 00:02:59.681665 waagent[2365]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 14 00:02:59.681665 waagent[2365]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 14 00:02:59.681665 waagent[2365]: 2: eth0 inet6 fe80::222:48ff:feb9:d64a/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 14 00:02:59.720191 waagent[2365]: 2026-01-14T00:02:59.720067Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jan 14 00:02:59.720191 waagent[2365]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 00:02:59.720191 waagent[2365]: pkts bytes target prot opt in out source destination Jan 14 00:02:59.720191 waagent[2365]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 14 00:02:59.720191 waagent[2365]: pkts bytes target prot opt in out source destination Jan 14 00:02:59.720191 waagent[2365]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 00:02:59.720191 waagent[2365]: pkts bytes target prot opt in out source destination Jan 14 00:02:59.720191 waagent[2365]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 14 00:02:59.720191 waagent[2365]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 14 00:02:59.720191 waagent[2365]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 14 00:02:59.722469 waagent[2365]: 2026-01-14T00:02:59.722422Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 14 00:02:59.722469 waagent[2365]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 00:02:59.722469 waagent[2365]: pkts bytes target prot opt in 
out source destination Jan 14 00:02:59.722469 waagent[2365]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 14 00:02:59.722469 waagent[2365]: pkts bytes target prot opt in out source destination Jan 14 00:02:59.722469 waagent[2365]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 00:02:59.722469 waagent[2365]: pkts bytes target prot opt in out source destination Jan 14 00:02:59.722469 waagent[2365]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 14 00:02:59.722469 waagent[2365]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 14 00:02:59.722469 waagent[2365]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 14 00:02:59.722649 waagent[2365]: 2026-01-14T00:02:59.722636Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 14 00:03:06.250500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 00:03:06.251865 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:03:06.364755 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:03:06.371889 (kubelet)[2517]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:03:06.482090 kubelet[2517]: E0114 00:03:06.482026 2517 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:03:06.484892 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:03:06.485022 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:03:06.485640 systemd[1]: kubelet.service: Consumed 116ms CPU time, 107.5M memory peak. Jan 14 00:03:16.647897 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 14 00:03:16.649755 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:03:16.753513 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:03:16.759579 (kubelet)[2531]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:03:16.873088 kubelet[2531]: E0114 00:03:16.873037 2531 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:03:16.875503 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:03:16.875630 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:03:16.876327 systemd[1]: kubelet.service: Consumed 110ms CPU time, 105.2M memory peak. Jan 14 00:03:18.252057 chronyd[2078]: Selected source PHC0 Jan 14 00:03:26.897780 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 14 00:03:26.899297 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:03:27.186784 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 00:03:27.192560 (kubelet)[2545]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:03:27.220446 kubelet[2545]: E0114 00:03:27.220394 2545 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:03:27.222841 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:03:27.222963 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:03:27.225246 systemd[1]: kubelet.service: Consumed 111ms CPU time, 107M memory peak. Jan 14 00:03:29.733208 waagent[2365]: 2026-01-14T00:03:29.732650Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 14 00:03:29.741896 waagent[2365]: 2026-01-14T00:03:29.741852Z INFO ExtHandler Jan 14 00:03:29.741985 waagent[2365]: 2026-01-14T00:03:29.741960Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 70221e63-e376-4bfb-a854-cb8b0bce1b68 eTag: 8739030791730045986 source: Fabric] Jan 14 00:03:29.742314 waagent[2365]: 2026-01-14T00:03:29.742279Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 14 00:03:29.742825 waagent[2365]: 2026-01-14T00:03:29.742790Z INFO ExtHandler Jan 14 00:03:29.742872 waagent[2365]: 2026-01-14T00:03:29.742853Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 14 00:03:29.800529 waagent[2365]: 2026-01-14T00:03:29.800482Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 14 00:03:29.853203 waagent[2365]: 2026-01-14T00:03:29.853092Z INFO ExtHandler Downloaded certificate {'thumbprint': 'B6CE785623268A1E4331BDB25F002F9F2F206734', 'hasPrivateKey': True} Jan 14 00:03:29.853632 waagent[2365]: 2026-01-14T00:03:29.853592Z INFO ExtHandler Fetch goal state completed Jan 14 00:03:29.853939 waagent[2365]: 2026-01-14T00:03:29.853906Z INFO ExtHandler ExtHandler Jan 14 00:03:29.853988 waagent[2365]: 2026-01-14T00:03:29.853969Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 4d8a0f02-94c2-4746-8be9-cf0892df5163 correlation 152f72e6-b35b-458c-a84b-dffbe7c65364 created: 2026-01-14T00:03:22.292351Z] Jan 14 00:03:29.854250 waagent[2365]: 2026-01-14T00:03:29.854220Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 14 00:03:29.854681 waagent[2365]: 2026-01-14T00:03:29.854650Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Jan 14 00:03:33.653340 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 14 00:03:33.654819 systemd[1]: Started sshd@0-10.200.20.29:22-10.200.16.10:36262.service - OpenSSH per-connection server daemon (10.200.16.10:36262). Jan 14 00:03:34.229488 sshd[2558]: Accepted publickey for core from 10.200.16.10 port 36262 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:03:34.230828 sshd-session[2558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:03:34.235648 systemd-logind[2105]: New session 4 of user core. Jan 14 00:03:34.241353 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 14 00:03:34.551748 systemd[1]: Started sshd@1-10.200.20.29:22-10.200.16.10:36278.service - OpenSSH per-connection server daemon (10.200.16.10:36278). Jan 14 00:03:34.971963 sshd[2565]: Accepted publickey for core from 10.200.16.10 port 36278 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:03:34.972804 sshd-session[2565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:03:34.976731 systemd-logind[2105]: New session 5 of user core. Jan 14 00:03:34.985463 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 14 00:03:35.204452 sshd[2569]: Connection closed by 10.200.16.10 port 36278 Jan 14 00:03:35.205018 sshd-session[2565]: pam_unix(sshd:session): session closed for user core Jan 14 00:03:35.208942 systemd-logind[2105]: Session 5 logged out. Waiting for processes to exit. Jan 14 00:03:35.209325 systemd[1]: sshd@1-10.200.20.29:22-10.200.16.10:36278.service: Deactivated successfully. Jan 14 00:03:35.210969 systemd[1]: session-5.scope: Deactivated successfully. Jan 14 00:03:35.212715 systemd-logind[2105]: Removed session 5. Jan 14 00:03:35.288051 systemd[1]: Started sshd@2-10.200.20.29:22-10.200.16.10:36294.service - OpenSSH per-connection server daemon (10.200.16.10:36294). Jan 14 00:03:35.679189 sshd[2575]: Accepted publickey for core from 10.200.16.10 port 36294 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:03:35.680155 sshd-session[2575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:03:35.684121 systemd-logind[2105]: New session 6 of user core. Jan 14 00:03:35.693556 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 14 00:03:35.893613 sshd[2579]: Connection closed by 10.200.16.10 port 36294 Jan 14 00:03:35.894193 sshd-session[2575]: pam_unix(sshd:session): session closed for user core Jan 14 00:03:35.898488 systemd[1]: sshd@2-10.200.20.29:22-10.200.16.10:36294.service: Deactivated successfully. Jan 14 00:03:35.900067 systemd[1]: session-6.scope: Deactivated successfully. Jan 14 00:03:35.900755 systemd-logind[2105]: Session 6 logged out. Waiting for processes to exit. Jan 14 00:03:35.901826 systemd-logind[2105]: Removed session 6. Jan 14 00:03:35.981381 systemd[1]: Started sshd@3-10.200.20.29:22-10.200.16.10:36302.service - OpenSSH per-connection server daemon (10.200.16.10:36302). Jan 14 00:03:36.193272 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jan 14 00:03:36.402847 sshd[2585]: Accepted publickey for core from 10.200.16.10 port 36302 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:03:36.403681 sshd-session[2585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:03:36.408328 systemd-logind[2105]: New session 7 of user core. Jan 14 00:03:36.414456 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 14 00:03:36.636833 sshd[2589]: Connection closed by 10.200.16.10 port 36302 Jan 14 00:03:36.636725 sshd-session[2585]: pam_unix(sshd:session): session closed for user core Jan 14 00:03:36.642413 systemd[1]: sshd@3-10.200.20.29:22-10.200.16.10:36302.service: Deactivated successfully. Jan 14 00:03:36.644619 systemd[1]: session-7.scope: Deactivated successfully. Jan 14 00:03:36.645870 systemd-logind[2105]: Session 7 logged out. Waiting for processes to exit. Jan 14 00:03:36.647623 systemd-logind[2105]: Removed session 7. 
Jan 14 00:03:36.725371 systemd[1]: Started sshd@4-10.200.20.29:22-10.200.16.10:36314.service - OpenSSH per-connection server daemon (10.200.16.10:36314). Jan 14 00:03:37.152498 sshd[2595]: Accepted publickey for core from 10.200.16.10 port 36314 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:03:37.153624 sshd-session[2595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:03:37.157806 systemd-logind[2105]: New session 8 of user core. Jan 14 00:03:37.160403 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 14 00:03:37.397721 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 14 00:03:37.399248 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:03:37.417864 sudo[2600]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 14 00:03:37.418441 sudo[2600]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 00:03:37.439916 sudo[2600]: pam_unix(sudo:session): session closed for user root Jan 14 00:03:37.517202 sshd[2599]: Connection closed by 10.200.16.10 port 36314 Jan 14 00:03:37.518127 sshd-session[2595]: pam_unix(sshd:session): session closed for user core Jan 14 00:03:37.523335 systemd[1]: sshd@4-10.200.20.29:22-10.200.16.10:36314.service: Deactivated successfully. Jan 14 00:03:37.527010 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 00:03:37.529358 systemd-logind[2105]: Session 8 logged out. Waiting for processes to exit. Jan 14 00:03:37.532373 systemd-logind[2105]: Removed session 8. Jan 14 00:03:37.572767 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:03:37.578574 (kubelet)[2614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:03:37.606452 systemd[1]: Started sshd@5-10.200.20.29:22-10.200.16.10:36326.service - OpenSSH per-connection server daemon (10.200.16.10:36326). Jan 14 00:03:37.609916 kubelet[2614]: E0114 00:03:37.609873 2614 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:03:37.612404 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:03:37.612619 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:03:37.617565 systemd[1]: kubelet.service: Consumed 112ms CPU time, 106.8M memory peak. Jan 14 00:03:38.030833 sshd[2621]: Accepted publickey for core from 10.200.16.10 port 36326 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:03:38.031960 sshd-session[2621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:03:38.036285 systemd-logind[2105]: New session 9 of user core. Jan 14 00:03:38.046369 systemd[1]: Started session-9.scope - Session 9 of User core. 
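In the audit records that follow, the PROCTITLE field is the executed command line, hex-encoded with NUL-separated arguments; 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 decodes to "/sbin/auditctl -R /etc/audit/audit.rules", and the later docker-generated entries decode to the iptables chain setup (for example "/usr/bin/iptables --wait -t nat -N DOCKER"). One way to decode such a field:

echo 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 \
  | xxd -r -p | tr '\0' ' '; echo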
Jan 14 00:03:38.191474 sudo[2628]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 14 00:03:38.191696 sudo[2628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 00:03:38.197123 sudo[2628]: pam_unix(sudo:session): session closed for user root Jan 14 00:03:38.202726 sudo[2627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 14 00:03:38.202934 sudo[2627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 00:03:38.209789 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 00:03:38.251197 kernel: kauditd_printk_skb: 147 callbacks suppressed Jan 14 00:03:38.251328 kernel: audit: type=1305 audit(1768349018.244:249): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 00:03:38.244000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 00:03:38.244000 audit[2652]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff528bb00 a2=420 a3=0 items=0 ppid=2633 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:38.257528 augenrules[2652]: No rules Jan 14 00:03:38.274878 kernel: audit: type=1300 audit(1768349018.244:249): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff528bb00 a2=420 a3=0 items=0 ppid=2633 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:38.275315 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 00:03:38.275538 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 00:03:38.244000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 00:03:38.280991 sudo[2627]: pam_unix(sudo:session): session closed for user root Jan 14 00:03:38.283499 kernel: audit: type=1327 audit(1768349018.244:249): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 00:03:38.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:03:38.296057 kernel: audit: type=1130 audit(1768349018.276:250): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:03:38.296098 kernel: audit: type=1131 audit(1768349018.276:251): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:03:38.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:03:38.279000 audit[2627]: USER_END pid=2627 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:03:38.322547 kernel: audit: type=1106 audit(1768349018.279:252): pid=2627 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:03:38.322600 kernel: audit: type=1104 audit(1768349018.279:253): pid=2627 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:03:38.279000 audit[2627]: CRED_DISP pid=2627 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:03:38.359268 sshd[2626]: Connection closed by 10.200.16.10 port 36326 Jan 14 00:03:38.360373 sshd-session[2621]: pam_unix(sshd:session): session closed for user core Jan 14 00:03:38.360000 audit[2621]: USER_END pid=2621 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:03:38.383006 systemd[1]: sshd@5-10.200.20.29:22-10.200.16.10:36326.service: Deactivated successfully. Jan 14 00:03:38.389524 kernel: audit: type=1106 audit(1768349018.360:254): pid=2621 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:03:38.389612 kernel: audit: type=1104 audit(1768349018.360:255): pid=2621 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:03:38.360000 audit[2621]: CRED_DISP pid=2621 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:03:38.392095 systemd[1]: session-9.scope: Deactivated successfully. Jan 14 00:03:38.393208 systemd-logind[2105]: Session 9 logged out. Waiting for processes to exit. Jan 14 00:03:38.395978 systemd-logind[2105]: Removed session 9. Jan 14 00:03:38.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.29:22-10.200.16.10:36326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:03:38.411523 kernel: audit: type=1131 audit(1768349018.388:256): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.29:22-10.200.16.10:36326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:03:38.445916 systemd[1]: Started sshd@6-10.200.20.29:22-10.200.16.10:36330.service - OpenSSH per-connection server daemon (10.200.16.10:36330). Jan 14 00:03:38.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.29:22-10.200.16.10:36330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:03:38.866000 audit[2661]: USER_ACCT pid=2661 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:03:38.867976 sshd[2661]: Accepted publickey for core from 10.200.16.10 port 36330 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:03:38.867000 audit[2661]: CRED_ACQ pid=2661 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:03:38.867000 audit[2661]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc28252e0 a2=3 a3=0 items=0 ppid=1 pid=2661 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:38.867000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:03:38.869516 sshd-session[2661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:03:38.873327 systemd-logind[2105]: New session 10 of user core. Jan 14 00:03:38.881329 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 14 00:03:38.883000 audit[2661]: USER_START pid=2661 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:03:38.884000 audit[2665]: CRED_ACQ pid=2665 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:03:39.027119 sudo[2666]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 14 00:03:39.025000 audit[2666]: USER_ACCT pid=2666 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:03:39.026000 audit[2666]: CRED_REFR pid=2666 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:03:39.026000 audit[2666]: USER_START pid=2666 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 00:03:39.027384 sudo[2666]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 00:03:40.081803 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 14 00:03:40.090437 (dockerd)[2685]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 14 00:03:40.161945 update_engine[2111]: I20260114 00:03:40.161470 2111 update_attempter.cc:509] Updating boot flags... Jan 14 00:03:41.093204 dockerd[2685]: time="2026-01-14T00:03:41.093076291Z" level=info msg="Starting up" Jan 14 00:03:41.094284 dockerd[2685]: time="2026-01-14T00:03:41.094261119Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 14 00:03:41.103231 dockerd[2685]: time="2026-01-14T00:03:41.103119182Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 14 00:03:41.137591 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1978554368-merged.mount: Deactivated successfully. Jan 14 00:03:41.195015 dockerd[2685]: time="2026-01-14T00:03:41.194965965Z" level=info msg="Loading containers: start." Jan 14 00:03:41.207210 kernel: Initializing XFRM netlink socket Jan 14 00:03:41.272000 audit[2795]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=2795 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:03:41.272000 audit[2795]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffe46f7ae0 a2=0 a3=0 items=0 ppid=2685 pid=2795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.272000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 00:03:41.274000 audit[2797]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=2797 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:03:41.274000 audit[2797]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd82c2b80 a2=0 a3=0 items=0 ppid=2685 pid=2797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.274000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 00:03:41.276000 audit[2799]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2799 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:03:41.276000 audit[2799]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe51300c0 a2=0 a3=0 items=0 ppid=2685 pid=2799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.276000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 00:03:41.278000 audit[2801]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2801 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:03:41.278000 audit[2801]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd729f7a0 a2=0 
a3=0 items=0 ppid=2685 pid=2801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.278000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 00:03:41.279000 audit[2803]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=2803 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:03:41.279000 audit[2803]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd632eb10 a2=0 a3=0 items=0 ppid=2685 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.279000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 00:03:41.281000 audit[2805]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=2805 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:03:41.281000 audit[2805]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffffb73a400 a2=0 a3=0 items=0 ppid=2685 pid=2805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.281000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 00:03:41.283000 audit[2807]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=2807 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:03:41.283000 audit[2807]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffcd26e8c0 a2=0 a3=0 items=0 ppid=2685 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.283000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 00:03:41.285000 audit[2809]: NETFILTER_CFG table=nat:12 family=2 entries=2 op=nft_register_chain pid=2809 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:03:41.285000 audit[2809]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffd3bcc5c0 a2=0 a3=0 items=0 ppid=2685 pid=2809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.285000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 00:03:41.397000 audit[2812]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2812 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:03:41.397000 audit[2812]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffc23aa310 a2=0 a3=0 items=0 ppid=2685 pid=2812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.397000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 14 00:03:41.399000 audit[2814]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=2814 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:03:41.399000 audit[2814]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffeefbb590 a2=0 a3=0 items=0 ppid=2685 pid=2814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.399000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 00:03:41.401000 audit[2816]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2816 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:03:41.401000 audit[2816]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffdb3b4c70 a2=0 a3=0 items=0 ppid=2685 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.401000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 00:03:41.402000 audit[2818]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2818 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:03:41.402000 audit[2818]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffd3448a10 a2=0 a3=0 items=0 ppid=2685 pid=2818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.402000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 00:03:41.404000 audit[2820]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_rule pid=2820 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:03:41.404000 audit[2820]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=fffff15efd70 a2=0 a3=0 items=0 ppid=2685 pid=2820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.404000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 00:03:41.479000 audit[2850]: NETFILTER_CFG table=nat:18 family=10 entries=2 op=nft_register_chain pid=2850 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:03:41.479000 audit[2850]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffff6608df0 a2=0 a3=0 items=0 ppid=2685 pid=2850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
14 00:03:41.479000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 00:03:41.481000 audit[2852]: NETFILTER_CFG table=filter:19 family=10 entries=2 op=nft_register_chain pid=2852 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:03:41.481000 audit[2852]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd90e3290 a2=0 a3=0 items=0 ppid=2685 pid=2852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.481000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 00:03:41.483000 audit[2854]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2854 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:03:41.483000 audit[2854]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff6ccc6f0 a2=0 a3=0 items=0 ppid=2685 pid=2854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.483000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 00:03:41.484000 audit[2856]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2856 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:03:41.484000 audit[2856]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdc68b6a0 a2=0 a3=0 items=0 ppid=2685 pid=2856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.484000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 00:03:41.486000 audit[2858]: NETFILTER_CFG table=filter:22 family=10 entries=1 op=nft_register_chain pid=2858 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:03:41.486000 audit[2858]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd3d1ad50 a2=0 a3=0 items=0 ppid=2685 pid=2858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.486000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 00:03:41.488000 audit[2860]: NETFILTER_CFG table=filter:23 family=10 entries=1 op=nft_register_chain pid=2860 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:03:41.488000 audit[2860]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc0dd4760 a2=0 a3=0 items=0 ppid=2685 pid=2860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.488000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 00:03:41.489000 audit[2862]: NETFILTER_CFG table=filter:24 family=10 entries=1 
op=nft_register_chain pid=2862 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:03:41.489000 audit[2862]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe8924710 a2=0 a3=0 items=0 ppid=2685 pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.489000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 00:03:41.491000 audit[2864]: NETFILTER_CFG table=nat:25 family=10 entries=2 op=nft_register_chain pid=2864 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:03:41.491000 audit[2864]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffd92e5890 a2=0 a3=0 items=0 ppid=2685 pid=2864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.491000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 00:03:41.493000 audit[2866]: NETFILTER_CFG table=nat:26 family=10 entries=2 op=nft_register_chain pid=2866 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:03:41.493000 audit[2866]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffc742d0f0 a2=0 a3=0 items=0 ppid=2685 pid=2866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.493000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 14 00:03:41.495000 audit[2868]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=2868 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:03:41.495000 audit[2868]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc5835f10 a2=0 a3=0 items=0 ppid=2685 pid=2868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.495000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 00:03:41.496000 audit[2870]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=2870 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:03:41.496000 audit[2870]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffcf0c3940 a2=0 a3=0 items=0 ppid=2685 pid=2870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.496000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 00:03:41.498000 audit[2872]: NETFILTER_CFG table=filter:29 family=10 entries=1 op=nft_register_rule 
pid=2872 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:03:41.498000 audit[2872]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffc29b2410 a2=0 a3=0 items=0 ppid=2685 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.498000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 00:03:41.499000 audit[2874]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_rule pid=2874 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:03:41.499000 audit[2874]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffe403dad0 a2=0 a3=0 items=0 ppid=2685 pid=2874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.499000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 00:03:41.504000 audit[2879]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=2879 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:03:41.504000 audit[2879]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff5590610 a2=0 a3=0 items=0 ppid=2685 pid=2879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.504000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 00:03:41.505000 audit[2881]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=2881 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:03:41.505000 audit[2881]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffc16fee90 a2=0 a3=0 items=0 ppid=2685 pid=2881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.505000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 00:03:41.507000 audit[2883]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2883 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:03:41.507000 audit[2883]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd3a6edb0 a2=0 a3=0 items=0 ppid=2685 pid=2883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.507000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 00:03:41.509000 audit[2885]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=2885 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:03:41.509000 audit[2885]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffed93b580 a2=0 a3=0 items=0 ppid=2685 
pid=2885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.509000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 00:03:41.510000 audit[2887]: NETFILTER_CFG table=filter:35 family=10 entries=1 op=nft_register_rule pid=2887 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:03:41.510000 audit[2887]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffcba3f610 a2=0 a3=0 items=0 ppid=2685 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.510000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 00:03:41.512000 audit[2889]: NETFILTER_CFG table=filter:36 family=10 entries=1 op=nft_register_rule pid=2889 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:03:41.512000 audit[2889]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd414eb70 a2=0 a3=0 items=0 ppid=2685 pid=2889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.512000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 00:03:41.567000 audit[2893]: NETFILTER_CFG table=nat:37 family=2 entries=2 op=nft_register_chain pid=2893 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:03:41.567000 audit[2893]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffe9c29b90 a2=0 a3=0 items=0 ppid=2685 pid=2893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.567000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 14 00:03:41.569000 audit[2895]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=2895 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:03:41.569000 audit[2895]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffd1c307a0 a2=0 a3=0 items=0 ppid=2685 pid=2895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.569000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 14 00:03:41.576000 audit[2903]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2903 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:03:41.576000 audit[2903]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=fffff8ea4480 a2=0 a3=0 items=0 ppid=2685 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.576000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 14 00:03:41.580000 audit[2908]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2908 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:03:41.580000 audit[2908]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffe025f230 a2=0 a3=0 items=0 ppid=2685 pid=2908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.580000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 14 00:03:41.582000 audit[2910]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2910 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:03:41.582000 audit[2910]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffdff926b0 a2=0 a3=0 items=0 ppid=2685 pid=2910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.582000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 14 00:03:41.584000 audit[2912]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=2912 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:03:41.584000 audit[2912]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff86ad510 a2=0 a3=0 items=0 ppid=2685 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.584000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 14 00:03:41.586000 audit[2914]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=2914 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:03:41.586000 audit[2914]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffe2cca8a0 a2=0 a3=0 items=0 ppid=2685 pid=2914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.586000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 00:03:41.587000 audit[2916]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_rule pid=2916 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:03:41.587000 audit[2916]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff73f2c80 a2=0 a3=0 items=0 
ppid=2685 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:41.587000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 14 00:03:41.589664 systemd-networkd[1723]: docker0: Link UP Jan 14 00:03:41.605070 dockerd[2685]: time="2026-01-14T00:03:41.604953514Z" level=info msg="Loading containers: done." Jan 14 00:03:41.667214 dockerd[2685]: time="2026-01-14T00:03:41.666941374Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 00:03:41.667214 dockerd[2685]: time="2026-01-14T00:03:41.667042065Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 14 00:03:41.667605 dockerd[2685]: time="2026-01-14T00:03:41.667500028Z" level=info msg="Initializing buildkit" Jan 14 00:03:41.714486 dockerd[2685]: time="2026-01-14T00:03:41.714443060Z" level=info msg="Completed buildkit initialization" Jan 14 00:03:41.718757 dockerd[2685]: time="2026-01-14T00:03:41.718708508Z" level=info msg="Daemon has completed initialization" Jan 14 00:03:41.719598 dockerd[2685]: time="2026-01-14T00:03:41.719075244Z" level=info msg="API listen on /run/docker.sock" Jan 14 00:03:41.719310 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 14 00:03:41.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:03:42.594982 containerd[2133]: time="2026-01-14T00:03:42.594942819Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 14 00:03:43.369917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount269373871.mount: Deactivated successfully. 
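The PROCTITLE values in the audit records above are the iptables/ip6tables command lines dockerd ran while building its DOCKER, DOCKER-FORWARD, DOCKER-USER and isolation chains, recorded as hex-encoded, NUL-separated argv strings. A minimal Python sketch for reading one back (the helper name is mine, not an auditd API), using one of the values recorded above:

```python
# Decode an audit PROCTITLE field: the process argv, NUL-separated and hex-encoded.
# Helper name and approach are illustrative only.
def decode_proctitle(hex_value: str) -> str:
    argv = bytes.fromhex(hex_value).split(b"\x00")
    return " ".join(arg.decode() for arg in argv if arg)

# One of the PROCTITLE values recorded above:
print(decode_proctitle(
    "2F7573722F62696E2F69707461626C6573002D2D77616974"
    "002D4900464F5257415244002D6A00444F434B45522D464F5257415244"
))
# -> /usr/bin/iptables --wait -I FORWARD -j DOCKER-FORWARD
```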
Jan 14 00:03:44.267096 containerd[2133]: time="2026-01-14T00:03:44.267030351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:03:44.270065 containerd[2133]: time="2026-01-14T00:03:44.269993447Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=24977719" Jan 14 00:03:44.273430 containerd[2133]: time="2026-01-14T00:03:44.273384577Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:03:44.277962 containerd[2133]: time="2026-01-14T00:03:44.277916479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:03:44.278599 containerd[2133]: time="2026-01-14T00:03:44.278571903Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 1.683186081s" Jan 14 00:03:44.278640 containerd[2133]: time="2026-01-14T00:03:44.278606616Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 14 00:03:44.279587 containerd[2133]: time="2026-01-14T00:03:44.279560775Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 14 00:03:45.497217 containerd[2133]: time="2026-01-14T00:03:45.496806634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:03:45.499879 containerd[2133]: time="2026-01-14T00:03:45.499668335Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22613932" Jan 14 00:03:45.503107 containerd[2133]: time="2026-01-14T00:03:45.503076930Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:03:45.507976 containerd[2133]: time="2026-01-14T00:03:45.507936591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:03:45.508732 containerd[2133]: time="2026-01-14T00:03:45.508699474Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.229108947s" Jan 14 00:03:45.508835 containerd[2133]: time="2026-01-14T00:03:45.508821101Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 14 
00:03:45.509735 containerd[2133]: time="2026-01-14T00:03:45.509719939Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 14 00:03:46.712582 containerd[2133]: time="2026-01-14T00:03:46.712524856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:03:46.716362 containerd[2133]: time="2026-01-14T00:03:46.716206961Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17608611" Jan 14 00:03:46.721682 containerd[2133]: time="2026-01-14T00:03:46.721438088Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:03:46.725581 containerd[2133]: time="2026-01-14T00:03:46.725442785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:03:46.726302 containerd[2133]: time="2026-01-14T00:03:46.725966838Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.216095888s" Jan 14 00:03:46.726302 containerd[2133]: time="2026-01-14T00:03:46.725999367Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 14 00:03:46.727115 containerd[2133]: time="2026-01-14T00:03:46.727089153Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 14 00:03:47.647704 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 14 00:03:47.650242 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:03:47.687278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount867583712.mount: Deactivated successfully. Jan 14 00:03:47.764155 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:03:47.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:03:47.767623 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 14 00:03:47.767710 kernel: audit: type=1130 audit(1768349027.764:307): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:03:47.785448 (kubelet)[3032]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:03:47.809870 kubelet[3032]: E0114 00:03:47.809796 3032 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:03:47.811873 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:03:47.811994 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:03:47.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 00:03:47.812621 systemd[1]: kubelet.service: Consumed 112ms CPU time, 106.7M memory peak. Jan 14 00:03:47.826193 kernel: audit: type=1131 audit(1768349027.812:308): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 00:03:49.010374 containerd[2133]: time="2026-01-14T00:03:49.010177940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:03:49.013686 containerd[2133]: time="2026-01-14T00:03:49.013642144Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27555645" Jan 14 00:03:49.016736 containerd[2133]: time="2026-01-14T00:03:49.016690810Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:03:49.020938 containerd[2133]: time="2026-01-14T00:03:49.020902968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:03:49.021563 containerd[2133]: time="2026-01-14T00:03:49.021137574Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 2.294021677s" Jan 14 00:03:49.021563 containerd[2133]: time="2026-01-14T00:03:49.021173471Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 14 00:03:49.021792 containerd[2133]: time="2026-01-14T00:03:49.021756565Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 14 00:03:49.744055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount94050368.mount: Deactivated successfully. 
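Each containerd pull above ends with a pair of records: a "stop pulling … bytes read=N" line and a "Pulled image … in D" line. A rough, purely illustrative back-of-the-envelope on the kube-proxy pull recorded above, with the numbers copied from those two records:

```python
# Approximate effective pull rate for registry.k8s.io/kube-proxy:v1.32.11,
# using the figures containerd reported above. Illustrative arithmetic only.
bytes_read = 27_555_645      # "stop pulling ... bytes read=27555645"
duration_s = 2.294021677     # "Pulled image ... in 2.294021677s"
print(f"~{bytes_read / duration_s / 1e6:.1f} MB/s")   # ~12.0 MB/s
```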
Jan 14 00:03:50.504188 containerd[2133]: time="2026-01-14T00:03:50.504105760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:03:50.507741 containerd[2133]: time="2026-01-14T00:03:50.507686386Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=84007" Jan 14 00:03:50.511395 containerd[2133]: time="2026-01-14T00:03:50.511349558Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:03:50.516491 containerd[2133]: time="2026-01-14T00:03:50.516319799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:03:50.517718 containerd[2133]: time="2026-01-14T00:03:50.517689118Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.495912385s" Jan 14 00:03:50.517817 containerd[2133]: time="2026-01-14T00:03:50.517805361Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 14 00:03:50.518435 containerd[2133]: time="2026-01-14T00:03:50.518265564Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 14 00:03:51.068005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4101016484.mount: Deactivated successfully. 
Jan 14 00:03:51.088939 containerd[2133]: time="2026-01-14T00:03:51.088401807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 00:03:51.091430 containerd[2133]: time="2026-01-14T00:03:51.091388076Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 00:03:51.094248 containerd[2133]: time="2026-01-14T00:03:51.094226684Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 00:03:51.098443 containerd[2133]: time="2026-01-14T00:03:51.098402628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 00:03:51.098954 containerd[2133]: time="2026-01-14T00:03:51.098926112Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 580.35091ms" Jan 14 00:03:51.099043 containerd[2133]: time="2026-01-14T00:03:51.099029314Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 14 00:03:51.099642 containerd[2133]: time="2026-01-14T00:03:51.099618712Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 14 00:03:51.698085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2517835552.mount: Deactivated successfully. 
Jan 14 00:03:54.174063 containerd[2133]: time="2026-01-14T00:03:54.173329997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:03:54.217848 containerd[2133]: time="2026-01-14T00:03:54.217780869Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=66060526" Jan 14 00:03:54.222188 containerd[2133]: time="2026-01-14T00:03:54.222153849Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:03:54.226794 containerd[2133]: time="2026-01-14T00:03:54.226753002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:03:54.228012 containerd[2133]: time="2026-01-14T00:03:54.227604654Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.127957894s" Jan 14 00:03:54.228012 containerd[2133]: time="2026-01-14T00:03:54.227640703Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 14 00:03:56.637867 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:03:56.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:03:56.638352 systemd[1]: kubelet.service: Consumed 112ms CPU time, 106.7M memory peak. Jan 14 00:03:56.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:03:56.651447 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:03:56.662769 kernel: audit: type=1130 audit(1768349036.637:309): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:03:56.662861 kernel: audit: type=1131 audit(1768349036.637:310): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:03:56.682635 systemd[1]: Reload requested from client PID 3182 ('systemctl') (unit session-10.scope)... Jan 14 00:03:56.682649 systemd[1]: Reloading... Jan 14 00:03:56.780224 zram_generator::config[3239]: No configuration found. Jan 14 00:03:56.942655 systemd[1]: Reloading finished in 259 ms. 
Jan 14 00:03:56.969000 audit: BPF prog-id=87 op=LOAD Jan 14 00:03:56.979270 kernel: audit: type=1334 audit(1768349036.969:311): prog-id=87 op=LOAD Jan 14 00:03:56.979375 kernel: audit: type=1334 audit(1768349036.972:312): prog-id=80 op=UNLOAD Jan 14 00:03:56.972000 audit: BPF prog-id=80 op=UNLOAD Jan 14 00:03:56.973000 audit: BPF prog-id=88 op=LOAD Jan 14 00:03:56.983270 kernel: audit: type=1334 audit(1768349036.973:313): prog-id=88 op=LOAD Jan 14 00:03:56.973000 audit: BPF prog-id=89 op=LOAD Jan 14 00:03:56.987564 kernel: audit: type=1334 audit(1768349036.973:314): prog-id=89 op=LOAD Jan 14 00:03:56.973000 audit: BPF prog-id=85 op=UNLOAD Jan 14 00:03:56.991768 kernel: audit: type=1334 audit(1768349036.973:315): prog-id=85 op=UNLOAD Jan 14 00:03:56.973000 audit: BPF prog-id=86 op=UNLOAD Jan 14 00:03:56.996144 kernel: audit: type=1334 audit(1768349036.973:316): prog-id=86 op=UNLOAD Jan 14 00:03:56.996214 kernel: audit: type=1334 audit(1768349036.982:317): prog-id=90 op=LOAD Jan 14 00:03:56.982000 audit: BPF prog-id=90 op=LOAD Jan 14 00:03:56.982000 audit: BPF prog-id=84 op=UNLOAD Jan 14 00:03:57.005271 kernel: audit: type=1334 audit(1768349036.982:318): prog-id=84 op=UNLOAD Jan 14 00:03:56.998000 audit: BPF prog-id=91 op=LOAD Jan 14 00:03:56.998000 audit: BPF prog-id=79 op=UNLOAD Jan 14 00:03:57.000000 audit: BPF prog-id=92 op=LOAD Jan 14 00:03:57.000000 audit: BPF prog-id=81 op=UNLOAD Jan 14 00:03:57.004000 audit: BPF prog-id=93 op=LOAD Jan 14 00:03:57.004000 audit: BPF prog-id=94 op=LOAD Jan 14 00:03:57.004000 audit: BPF prog-id=82 op=UNLOAD Jan 14 00:03:57.004000 audit: BPF prog-id=83 op=UNLOAD Jan 14 00:03:57.005000 audit: BPF prog-id=95 op=LOAD Jan 14 00:03:57.005000 audit: BPF prog-id=70 op=UNLOAD Jan 14 00:03:57.005000 audit: BPF prog-id=96 op=LOAD Jan 14 00:03:57.005000 audit: BPF prog-id=97 op=LOAD Jan 14 00:03:57.005000 audit: BPF prog-id=71 op=UNLOAD Jan 14 00:03:57.005000 audit: BPF prog-id=72 op=UNLOAD Jan 14 00:03:57.006000 audit: BPF prog-id=98 op=LOAD Jan 14 00:03:57.006000 audit: BPF prog-id=67 op=UNLOAD Jan 14 00:03:57.006000 audit: BPF prog-id=99 op=LOAD Jan 14 00:03:57.006000 audit: BPF prog-id=100 op=LOAD Jan 14 00:03:57.006000 audit: BPF prog-id=68 op=UNLOAD Jan 14 00:03:57.006000 audit: BPF prog-id=69 op=UNLOAD Jan 14 00:03:57.007000 audit: BPF prog-id=101 op=LOAD Jan 14 00:03:57.007000 audit: BPF prog-id=73 op=UNLOAD Jan 14 00:03:57.007000 audit: BPF prog-id=102 op=LOAD Jan 14 00:03:57.007000 audit: BPF prog-id=103 op=LOAD Jan 14 00:03:57.007000 audit: BPF prog-id=74 op=UNLOAD Jan 14 00:03:57.007000 audit: BPF prog-id=75 op=UNLOAD Jan 14 00:03:57.008000 audit: BPF prog-id=104 op=LOAD Jan 14 00:03:57.008000 audit: BPF prog-id=76 op=UNLOAD Jan 14 00:03:57.008000 audit: BPF prog-id=105 op=LOAD Jan 14 00:03:57.008000 audit: BPF prog-id=106 op=LOAD Jan 14 00:03:57.008000 audit: BPF prog-id=77 op=UNLOAD Jan 14 00:03:57.008000 audit: BPF prog-id=78 op=UNLOAD Jan 14 00:03:57.021559 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 00:03:57.021622 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 00:03:57.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 00:03:57.022065 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:03:57.022149 systemd[1]: kubelet.service: Consumed 78ms CPU time, 95.2M memory peak. 
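The burst of "BPF prog-id=… op=LOAD/UNLOAD" audit records above accompanies the systemd reload; reading them as systemd swapping out per-unit BPF programs during the reload is an assumption on my part. A small sketch for tallying them from a captured copy of this journal (the embedded sample lines are copied from the records above):

```python
import re
from collections import Counter

# Count LOAD vs UNLOAD among the "audit: BPF prog-id=N op=..." records.
# `journal_text` stands in for a captured copy of this journal; the two sample
# lines below are taken from the reload above.
journal_text = """\
Jan 14 00:03:56.969000 audit: BPF prog-id=87 op=LOAD
Jan 14 00:03:56.972000 audit: BPF prog-id=80 op=UNLOAD
"""

ops = Counter(re.findall(r"BPF prog-id=\d+ op=(LOAD|UNLOAD)", journal_text))
print(dict(ops))   # {'LOAD': 1, 'UNLOAD': 1} for the sample; many more in the full log
```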
Jan 14 00:03:57.023800 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:03:57.379893 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:03:57.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:03:57.391660 (kubelet)[3299]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 00:03:57.420520 kubelet[3299]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 00:03:57.420893 kubelet[3299]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 00:03:57.420936 kubelet[3299]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 00:03:57.421066 kubelet[3299]: I0114 00:03:57.421032 3299 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 00:03:57.752638 kubelet[3299]: I0114 00:03:57.752586 3299 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 14 00:03:57.752638 kubelet[3299]: I0114 00:03:57.752626 3299 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 00:03:57.752871 kubelet[3299]: I0114 00:03:57.752850 3299 server.go:954] "Client rotation is on, will bootstrap in background" Jan 14 00:03:57.780563 kubelet[3299]: E0114 00:03:57.780513 3299 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.29:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.29:6443: connect: connection refused" logger="UnhandledError" Jan 14 00:03:57.782971 kubelet[3299]: I0114 00:03:57.782847 3299 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 00:03:57.790389 kubelet[3299]: I0114 00:03:57.790287 3299 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 00:03:57.793640 kubelet[3299]: I0114 00:03:57.793613 3299 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 00:03:57.794467 kubelet[3299]: I0114 00:03:57.794422 3299 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 00:03:57.794620 kubelet[3299]: I0114 00:03:57.794470 3299 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547.0.0-n-d5ef04779b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 00:03:57.794713 kubelet[3299]: I0114 00:03:57.794628 3299 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 00:03:57.794713 kubelet[3299]: I0114 00:03:57.794636 3299 container_manager_linux.go:304] "Creating device plugin manager" Jan 14 00:03:57.794797 kubelet[3299]: I0114 00:03:57.794780 3299 state_mem.go:36] "Initialized new in-memory state store" Jan 14 00:03:57.797303 kubelet[3299]: I0114 00:03:57.797281 3299 kubelet.go:446] "Attempting to sync node with API server" Jan 14 00:03:57.797347 kubelet[3299]: I0114 00:03:57.797308 3299 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 00:03:57.797347 kubelet[3299]: I0114 00:03:57.797332 3299 kubelet.go:352] "Adding apiserver pod source" Jan 14 00:03:57.797347 kubelet[3299]: I0114 00:03:57.797341 3299 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 00:03:57.800350 kubelet[3299]: W0114 00:03:57.800297 3299 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.29:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.29:6443: connect: connection refused Jan 14 00:03:57.800458 kubelet[3299]: E0114 00:03:57.800368 3299 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.29:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.29:6443: connect: connection refused" logger="UnhandledError" Jan 14 00:03:57.802193 kubelet[3299]: W0114 
00:03:57.801803 3299 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547.0.0-n-d5ef04779b&limit=500&resourceVersion=0": dial tcp 10.200.20.29:6443: connect: connection refused Jan 14 00:03:57.802193 kubelet[3299]: E0114 00:03:57.801839 3299 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547.0.0-n-d5ef04779b&limit=500&resourceVersion=0\": dial tcp 10.200.20.29:6443: connect: connection refused" logger="UnhandledError" Jan 14 00:03:57.802193 kubelet[3299]: I0114 00:03:57.801928 3299 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 00:03:57.802300 kubelet[3299]: I0114 00:03:57.802274 3299 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 14 00:03:57.802342 kubelet[3299]: W0114 00:03:57.802329 3299 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 14 00:03:57.802817 kubelet[3299]: I0114 00:03:57.802786 3299 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 00:03:57.802817 kubelet[3299]: I0114 00:03:57.802820 3299 server.go:1287] "Started kubelet" Jan 14 00:03:57.804207 kubelet[3299]: I0114 00:03:57.804153 3299 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 00:03:57.804951 kubelet[3299]: I0114 00:03:57.804931 3299 server.go:479] "Adding debug handlers to kubelet server" Jan 14 00:03:57.806761 kubelet[3299]: I0114 00:03:57.806735 3299 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 00:03:57.807364 kubelet[3299]: I0114 00:03:57.807303 3299 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 00:03:57.807550 kubelet[3299]: I0114 00:03:57.807530 3299 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 00:03:57.809477 kubelet[3299]: E0114 00:03:57.809373 3299 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.29:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.29:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4547.0.0-n-d5ef04779b.188a7014ad23d738 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547.0.0-n-d5ef04779b,UID:ci-4547.0.0-n-d5ef04779b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547.0.0-n-d5ef04779b,},FirstTimestamp:2026-01-14 00:03:57.802805048 +0000 UTC m=+0.408069741,LastTimestamp:2026-01-14 00:03:57.802805048 +0000 UTC m=+0.408069741,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547.0.0-n-d5ef04779b,}" Jan 14 00:03:57.809000 audit[3311]: NETFILTER_CFG table=mangle:45 family=2 entries=2 op=nft_register_chain pid=3311 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:03:57.809000 audit[3311]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe5ecad90 a2=0 a3=0 items=0 ppid=3299 pid=3311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:57.809000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 00:03:57.810845 kubelet[3299]: E0114 00:03:57.810801 3299 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 00:03:57.810962 kubelet[3299]: I0114 00:03:57.810935 3299 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 00:03:57.810000 audit[3312]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=3312 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:03:57.810000 audit[3312]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdc506d40 a2=0 a3=0 items=0 ppid=3299 pid=3312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:57.810000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 00:03:57.812879 kubelet[3299]: I0114 00:03:57.812862 3299 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 00:03:57.813118 kubelet[3299]: E0114 00:03:57.813083 3299 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.0.0-n-d5ef04779b\" not found" Jan 14 00:03:57.813358 kubelet[3299]: E0114 00:03:57.813310 3299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-n-d5ef04779b?timeout=10s\": dial tcp 10.200.20.29:6443: connect: connection refused" interval="200ms" Jan 14 00:03:57.813764 kubelet[3299]: I0114 00:03:57.813730 3299 factory.go:221] Registration of the systemd container factory successfully Jan 14 00:03:57.813846 kubelet[3299]: I0114 00:03:57.813816 3299 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 00:03:57.812000 audit[3314]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=3314 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:03:57.812000 audit[3314]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffff62200a0 a2=0 a3=0 items=0 ppid=3299 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:57.812000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 00:03:57.814600 kubelet[3299]: I0114 00:03:57.814577 3299 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 00:03:57.814931 kubelet[3299]: W0114 00:03:57.814888 3299 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.29:6443: connect: connection refused Jan 14 
00:03:57.814987 kubelet[3299]: E0114 00:03:57.814934 3299 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.29:6443: connect: connection refused" logger="UnhandledError" Jan 14 00:03:57.815008 kubelet[3299]: I0114 00:03:57.814988 3299 reconciler.go:26] "Reconciler: start to sync state" Jan 14 00:03:57.815446 kubelet[3299]: I0114 00:03:57.815415 3299 factory.go:221] Registration of the containerd container factory successfully Jan 14 00:03:57.815000 audit[3316]: NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=3316 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:03:57.815000 audit[3316]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffffd86c730 a2=0 a3=0 items=0 ppid=3299 pid=3316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:57.815000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 00:03:57.829000 audit[3320]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3320 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:03:57.829000 audit[3320]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffd6fdeb30 a2=0 a3=0 items=0 ppid=3299 pid=3320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:57.829000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 14 00:03:57.831419 kubelet[3299]: I0114 00:03:57.831378 3299 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 14 00:03:57.831000 audit[3322]: NETFILTER_CFG table=mangle:50 family=10 entries=2 op=nft_register_chain pid=3322 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:03:57.831000 audit[3322]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc94272c0 a2=0 a3=0 items=0 ppid=3299 pid=3322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:57.831000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 00:03:57.831000 audit[3323]: NETFILTER_CFG table=mangle:51 family=2 entries=1 op=nft_register_chain pid=3323 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:03:57.833016 kubelet[3299]: I0114 00:03:57.832996 3299 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 14 00:03:57.833090 kubelet[3299]: I0114 00:03:57.833082 3299 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 14 00:03:57.833218 kubelet[3299]: I0114 00:03:57.833149 3299 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 00:03:57.833280 kubelet[3299]: I0114 00:03:57.833269 3299 kubelet.go:2382] "Starting kubelet main sync loop" Jan 14 00:03:57.833365 kubelet[3299]: E0114 00:03:57.833349 3299 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 00:03:57.831000 audit[3323]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc47f4b70 a2=0 a3=0 items=0 ppid=3299 pid=3323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:57.831000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 00:03:57.833000 audit[3324]: NETFILTER_CFG table=mangle:52 family=10 entries=1 op=nft_register_chain pid=3324 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:03:57.833000 audit[3324]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe34284f0 a2=0 a3=0 items=0 ppid=3299 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:57.833000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 00:03:57.833000 audit[3325]: NETFILTER_CFG table=nat:53 family=2 entries=1 op=nft_register_chain pid=3325 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:03:57.833000 audit[3325]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc9f95900 a2=0 a3=0 items=0 ppid=3299 pid=3325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:57.834000 audit[3326]: NETFILTER_CFG table=nat:54 family=10 entries=1 op=nft_register_chain pid=3326 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:03:57.834000 audit[3326]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffe179860 a2=0 a3=0 items=0 ppid=3299 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:57.834000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 00:03:57.833000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 00:03:57.835000 audit[3327]: NETFILTER_CFG table=filter:55 family=10 entries=1 op=nft_register_chain pid=3327 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:03:57.835000 audit[3327]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe7ef16d0 a2=0 a3=0 items=0 ppid=3299 pid=3327 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:57.835000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 00:03:57.835000 audit[3328]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_chain pid=3328 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:03:57.835000 audit[3328]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffeb38f900 a2=0 a3=0 items=0 ppid=3299 pid=3328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:57.835000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 00:03:57.837355 kubelet[3299]: I0114 00:03:57.837336 3299 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 00:03:57.837452 kubelet[3299]: I0114 00:03:57.837441 3299 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 00:03:57.837501 kubelet[3299]: I0114 00:03:57.837493 3299 state_mem.go:36] "Initialized new in-memory state store" Jan 14 00:03:57.838995 kubelet[3299]: W0114 00:03:57.838789 3299 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.29:6443: connect: connection refused Jan 14 00:03:57.838995 kubelet[3299]: E0114 00:03:57.838839 3299 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.29:6443: connect: connection refused" logger="UnhandledError" Jan 14 00:03:57.849607 kubelet[3299]: I0114 00:03:57.849578 3299 policy_none.go:49] "None policy: Start" Jan 14 00:03:57.849761 kubelet[3299]: I0114 00:03:57.849751 3299 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 00:03:57.849857 kubelet[3299]: I0114 00:03:57.849849 3299 state_mem.go:35] "Initializing new in-memory state store" Jan 14 00:03:57.859338 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 14 00:03:57.873585 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 14 00:03:57.883032 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 14 00:03:57.885187 kubelet[3299]: I0114 00:03:57.884216 3299 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 14 00:03:57.885187 kubelet[3299]: I0114 00:03:57.884405 3299 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 00:03:57.885187 kubelet[3299]: I0114 00:03:57.884417 3299 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 00:03:57.885187 kubelet[3299]: I0114 00:03:57.884992 3299 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 00:03:57.886184 kubelet[3299]: E0114 00:03:57.886143 3299 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 14 00:03:57.886343 kubelet[3299]: E0114 00:03:57.886327 3299 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4547.0.0-n-d5ef04779b\" not found" Jan 14 00:03:57.943576 systemd[1]: Created slice kubepods-burstable-pod0047a24ac6df07bed92220edb44efee1.slice - libcontainer container kubepods-burstable-pod0047a24ac6df07bed92220edb44efee1.slice. Jan 14 00:03:57.959681 kubelet[3299]: E0114 00:03:57.959651 3299 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-n-d5ef04779b\" not found" node="ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:57.963037 systemd[1]: Created slice kubepods-burstable-pod715bae0514390aba840e0b9cb9df2130.slice - libcontainer container kubepods-burstable-pod715bae0514390aba840e0b9cb9df2130.slice. Jan 14 00:03:57.964696 kubelet[3299]: E0114 00:03:57.964674 3299 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-n-d5ef04779b\" not found" node="ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:57.967351 systemd[1]: Created slice kubepods-burstable-pod9d63792101fea76fa98dee1a5c524903.slice - libcontainer container kubepods-burstable-pod9d63792101fea76fa98dee1a5c524903.slice. 
Jan 14 00:03:57.968662 kubelet[3299]: E0114 00:03:57.968643 3299 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-n-d5ef04779b\" not found" node="ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:57.988702 kubelet[3299]: I0114 00:03:57.988653 3299 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:57.989380 kubelet[3299]: E0114 00:03:57.989348 3299 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.29:6443/api/v1/nodes\": dial tcp 10.200.20.29:6443: connect: connection refused" node="ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:58.013852 kubelet[3299]: E0114 00:03:58.013730 3299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-n-d5ef04779b?timeout=10s\": dial tcp 10.200.20.29:6443: connect: connection refused" interval="400ms" Jan 14 00:03:58.015998 kubelet[3299]: I0114 00:03:58.015967 3299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/715bae0514390aba840e0b9cb9df2130-k8s-certs\") pod \"kube-controller-manager-ci-4547.0.0-n-d5ef04779b\" (UID: \"715bae0514390aba840e0b9cb9df2130\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:58.016066 kubelet[3299]: I0114 00:03:58.016007 3299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/715bae0514390aba840e0b9cb9df2130-kubeconfig\") pod \"kube-controller-manager-ci-4547.0.0-n-d5ef04779b\" (UID: \"715bae0514390aba840e0b9cb9df2130\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:58.016066 kubelet[3299]: I0114 00:03:58.016032 3299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0047a24ac6df07bed92220edb44efee1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547.0.0-n-d5ef04779b\" (UID: \"0047a24ac6df07bed92220edb44efee1\") " pod="kube-system/kube-apiserver-ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:58.016066 kubelet[3299]: I0114 00:03:58.016044 3299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0047a24ac6df07bed92220edb44efee1-k8s-certs\") pod \"kube-apiserver-ci-4547.0.0-n-d5ef04779b\" (UID: \"0047a24ac6df07bed92220edb44efee1\") " pod="kube-system/kube-apiserver-ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:58.016066 kubelet[3299]: I0114 00:03:58.016055 3299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/715bae0514390aba840e0b9cb9df2130-ca-certs\") pod \"kube-controller-manager-ci-4547.0.0-n-d5ef04779b\" (UID: \"715bae0514390aba840e0b9cb9df2130\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:58.016066 kubelet[3299]: I0114 00:03:58.016066 3299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/715bae0514390aba840e0b9cb9df2130-flexvolume-dir\") pod \"kube-controller-manager-ci-4547.0.0-n-d5ef04779b\" (UID: \"715bae0514390aba840e0b9cb9df2130\") " 
pod="kube-system/kube-controller-manager-ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:58.016185 kubelet[3299]: I0114 00:03:58.016076 3299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/715bae0514390aba840e0b9cb9df2130-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547.0.0-n-d5ef04779b\" (UID: \"715bae0514390aba840e0b9cb9df2130\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:58.016185 kubelet[3299]: I0114 00:03:58.016099 3299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9d63792101fea76fa98dee1a5c524903-kubeconfig\") pod \"kube-scheduler-ci-4547.0.0-n-d5ef04779b\" (UID: \"9d63792101fea76fa98dee1a5c524903\") " pod="kube-system/kube-scheduler-ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:58.016185 kubelet[3299]: I0114 00:03:58.016109 3299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0047a24ac6df07bed92220edb44efee1-ca-certs\") pod \"kube-apiserver-ci-4547.0.0-n-d5ef04779b\" (UID: \"0047a24ac6df07bed92220edb44efee1\") " pod="kube-system/kube-apiserver-ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:58.191977 kubelet[3299]: I0114 00:03:58.191926 3299 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:58.192586 kubelet[3299]: E0114 00:03:58.192560 3299 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.29:6443/api/v1/nodes\": dial tcp 10.200.20.29:6443: connect: connection refused" node="ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:58.261974 containerd[2133]: time="2026-01-14T00:03:58.261922303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547.0.0-n-d5ef04779b,Uid:0047a24ac6df07bed92220edb44efee1,Namespace:kube-system,Attempt:0,}" Jan 14 00:03:58.266883 containerd[2133]: time="2026-01-14T00:03:58.266730922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547.0.0-n-d5ef04779b,Uid:715bae0514390aba840e0b9cb9df2130,Namespace:kube-system,Attempt:0,}" Jan 14 00:03:58.269714 containerd[2133]: time="2026-01-14T00:03:58.269638720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547.0.0-n-d5ef04779b,Uid:9d63792101fea76fa98dee1a5c524903,Namespace:kube-system,Attempt:0,}" Jan 14 00:03:58.345148 containerd[2133]: time="2026-01-14T00:03:58.345094619Z" level=info msg="connecting to shim d4abf57bb00adf8b61bef537d098b2ff3b0b6126dd64400e2c871fd6dfaec3c2" address="unix:///run/containerd/s/3a424d9ac28e553722b8c511be2fcca0761415161d05eb6e4d54e6c47dc2869a" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:03:58.345638 containerd[2133]: time="2026-01-14T00:03:58.345581886Z" level=info msg="connecting to shim e37c4b355897d7eddabce85d2dd639ce457ea24b767b2c11d6ed7c9dc2dcfbc7" address="unix:///run/containerd/s/5aa14cc5c9cce912c677eeb3e1b749e218130c3fd787046f07d185b3f3ba6fed" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:03:58.368183 containerd[2133]: time="2026-01-14T00:03:58.368129251Z" level=info msg="connecting to shim d780548a90cb353a80f5b38b1cb95319fa0e3c5a35ea47362c9af606c31ff39f" address="unix:///run/containerd/s/41f6182aab1c96cd05ed38f6e17429dcae9c720f1966fad05c1f0b3b7dc0d29e" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:03:58.381679 systemd[1]: Started 
cri-containerd-e37c4b355897d7eddabce85d2dd639ce457ea24b767b2c11d6ed7c9dc2dcfbc7.scope - libcontainer container e37c4b355897d7eddabce85d2dd639ce457ea24b767b2c11d6ed7c9dc2dcfbc7. Jan 14 00:03:58.385057 systemd[1]: Started cri-containerd-d4abf57bb00adf8b61bef537d098b2ff3b0b6126dd64400e2c871fd6dfaec3c2.scope - libcontainer container d4abf57bb00adf8b61bef537d098b2ff3b0b6126dd64400e2c871fd6dfaec3c2. Jan 14 00:03:58.398338 systemd[1]: Started cri-containerd-d780548a90cb353a80f5b38b1cb95319fa0e3c5a35ea47362c9af606c31ff39f.scope - libcontainer container d780548a90cb353a80f5b38b1cb95319fa0e3c5a35ea47362c9af606c31ff39f. Jan 14 00:03:58.400000 audit: BPF prog-id=107 op=LOAD Jan 14 00:03:58.401000 audit: BPF prog-id=108 op=LOAD Jan 14 00:03:58.401000 audit[3371]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3358 pid=3371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434616266353762623030616466386236316265663533376430393862 Jan 14 00:03:58.401000 audit: BPF prog-id=108 op=UNLOAD Jan 14 00:03:58.401000 audit[3371]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3358 pid=3371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434616266353762623030616466386236316265663533376430393862 Jan 14 00:03:58.401000 audit: BPF prog-id=109 op=LOAD Jan 14 00:03:58.401000 audit[3371]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3358 pid=3371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434616266353762623030616466386236316265663533376430393862 Jan 14 00:03:58.401000 audit: BPF prog-id=110 op=LOAD Jan 14 00:03:58.401000 audit[3371]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3358 pid=3371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434616266353762623030616466386236316265663533376430393862 Jan 14 00:03:58.402000 audit: BPF prog-id=110 op=UNLOAD Jan 14 00:03:58.402000 audit[3371]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 
a2=0 a3=0 items=0 ppid=3358 pid=3371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.402000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434616266353762623030616466386236316265663533376430393862 Jan 14 00:03:58.402000 audit: BPF prog-id=109 op=UNLOAD Jan 14 00:03:58.402000 audit[3371]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3358 pid=3371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.402000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434616266353762623030616466386236316265663533376430393862 Jan 14 00:03:58.402000 audit: BPF prog-id=111 op=LOAD Jan 14 00:03:58.402000 audit[3371]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3358 pid=3371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.402000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434616266353762623030616466386236316265663533376430393862 Jan 14 00:03:58.404000 audit: BPF prog-id=112 op=LOAD Jan 14 00:03:58.405000 audit: BPF prog-id=113 op=LOAD Jan 14 00:03:58.405000 audit[3370]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3337 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.405000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533376334623335353839376437656464616263653835643264643633 Jan 14 00:03:58.405000 audit: BPF prog-id=113 op=UNLOAD Jan 14 00:03:58.405000 audit[3370]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3337 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.405000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533376334623335353839376437656464616263653835643264643633 Jan 14 00:03:58.406000 audit: BPF prog-id=114 op=LOAD Jan 14 00:03:58.406000 audit[3370]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3337 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533376334623335353839376437656464616263653835643264643633 Jan 14 00:03:58.406000 audit: BPF prog-id=115 op=LOAD Jan 14 00:03:58.406000 audit[3370]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3337 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533376334623335353839376437656464616263653835643264643633 Jan 14 00:03:58.406000 audit: BPF prog-id=115 op=UNLOAD Jan 14 00:03:58.406000 audit[3370]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3337 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533376334623335353839376437656464616263653835643264643633 Jan 14 00:03:58.406000 audit: BPF prog-id=114 op=UNLOAD Jan 14 00:03:58.406000 audit[3370]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3337 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533376334623335353839376437656464616263653835643264643633 Jan 14 00:03:58.407000 audit: BPF prog-id=116 op=LOAD Jan 14 00:03:58.407000 audit[3370]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3337 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533376334623335353839376437656464616263653835643264643633 Jan 14 00:03:58.413000 audit: BPF prog-id=117 op=LOAD Jan 14 00:03:58.415073 kubelet[3299]: E0114 00:03:58.414922 3299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-n-d5ef04779b?timeout=10s\": dial tcp 10.200.20.29:6443: connect: connection refused" interval="800ms" Jan 14 
00:03:58.414000 audit: BPF prog-id=118 op=LOAD Jan 14 00:03:58.414000 audit[3412]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3390 pid=3412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437383035343861393063623335336138306635623338623163623935 Jan 14 00:03:58.415000 audit: BPF prog-id=118 op=UNLOAD Jan 14 00:03:58.415000 audit[3412]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3390 pid=3412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437383035343861393063623335336138306635623338623163623935 Jan 14 00:03:58.415000 audit: BPF prog-id=119 op=LOAD Jan 14 00:03:58.415000 audit[3412]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3390 pid=3412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437383035343861393063623335336138306635623338623163623935 Jan 14 00:03:58.415000 audit: BPF prog-id=120 op=LOAD Jan 14 00:03:58.415000 audit[3412]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3390 pid=3412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437383035343861393063623335336138306635623338623163623935 Jan 14 00:03:58.415000 audit: BPF prog-id=120 op=UNLOAD Jan 14 00:03:58.415000 audit[3412]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3390 pid=3412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437383035343861393063623335336138306635623338623163623935 Jan 14 00:03:58.415000 audit: BPF prog-id=119 op=UNLOAD Jan 14 00:03:58.415000 audit[3412]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 
a3=0 items=0 ppid=3390 pid=3412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437383035343861393063623335336138306635623338623163623935 Jan 14 00:03:58.415000 audit: BPF prog-id=121 op=LOAD Jan 14 00:03:58.415000 audit[3412]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3390 pid=3412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437383035343861393063623335336138306635623338623163623935 Jan 14 00:03:58.459590 containerd[2133]: time="2026-01-14T00:03:58.459542588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547.0.0-n-d5ef04779b,Uid:0047a24ac6df07bed92220edb44efee1,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4abf57bb00adf8b61bef537d098b2ff3b0b6126dd64400e2c871fd6dfaec3c2\"" Jan 14 00:03:58.465328 containerd[2133]: time="2026-01-14T00:03:58.465284106Z" level=info msg="CreateContainer within sandbox \"d4abf57bb00adf8b61bef537d098b2ff3b0b6126dd64400e2c871fd6dfaec3c2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 00:03:58.467889 containerd[2133]: time="2026-01-14T00:03:58.467855298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547.0.0-n-d5ef04779b,Uid:715bae0514390aba840e0b9cb9df2130,Namespace:kube-system,Attempt:0,} returns sandbox id \"e37c4b355897d7eddabce85d2dd639ce457ea24b767b2c11d6ed7c9dc2dcfbc7\"" Jan 14 00:03:58.469790 containerd[2133]: time="2026-01-14T00:03:58.469756140Z" level=info msg="CreateContainer within sandbox \"e37c4b355897d7eddabce85d2dd639ce457ea24b767b2c11d6ed7c9dc2dcfbc7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 00:03:58.486431 containerd[2133]: time="2026-01-14T00:03:58.486388088Z" level=info msg="Container 7865e0a2bde85d207a19a4b0d67ab0a07e4f3ba185ff9256527f59fcf44af400: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:03:58.489290 containerd[2133]: time="2026-01-14T00:03:58.489259071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547.0.0-n-d5ef04779b,Uid:9d63792101fea76fa98dee1a5c524903,Namespace:kube-system,Attempt:0,} returns sandbox id \"d780548a90cb353a80f5b38b1cb95319fa0e3c5a35ea47362c9af606c31ff39f\"" Jan 14 00:03:58.491727 containerd[2133]: time="2026-01-14T00:03:58.491708092Z" level=info msg="CreateContainer within sandbox \"d780548a90cb353a80f5b38b1cb95319fa0e3c5a35ea47362c9af606c31ff39f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 00:03:58.514961 containerd[2133]: time="2026-01-14T00:03:58.514923720Z" level=info msg="CreateContainer within sandbox \"d4abf57bb00adf8b61bef537d098b2ff3b0b6126dd64400e2c871fd6dfaec3c2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"7865e0a2bde85d207a19a4b0d67ab0a07e4f3ba185ff9256527f59fcf44af400\"" Jan 14 00:03:58.515715 containerd[2133]: time="2026-01-14T00:03:58.515686809Z" level=info msg="StartContainer for \"7865e0a2bde85d207a19a4b0d67ab0a07e4f3ba185ff9256527f59fcf44af400\"" Jan 14 00:03:58.517059 containerd[2133]: time="2026-01-14T00:03:58.516956933Z" level=info msg="connecting to shim 7865e0a2bde85d207a19a4b0d67ab0a07e4f3ba185ff9256527f59fcf44af400" address="unix:///run/containerd/s/3a424d9ac28e553722b8c511be2fcca0761415161d05eb6e4d54e6c47dc2869a" protocol=ttrpc version=3 Jan 14 00:03:58.521022 containerd[2133]: time="2026-01-14T00:03:58.520952956Z" level=info msg="Container 5633c324e8d32304bbdf8d5b0fc618f3c3b0158e52b6a63c2a640c8acb5e279c: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:03:58.534363 systemd[1]: Started cri-containerd-7865e0a2bde85d207a19a4b0d67ab0a07e4f3ba185ff9256527f59fcf44af400.scope - libcontainer container 7865e0a2bde85d207a19a4b0d67ab0a07e4f3ba185ff9256527f59fcf44af400. Jan 14 00:03:58.538258 containerd[2133]: time="2026-01-14T00:03:58.537874471Z" level=info msg="Container 9be5a0e86d6e6e71c25adf28f49a133f4972af9e24d74b66137506aaea66c57d: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:03:58.546000 audit: BPF prog-id=122 op=LOAD Jan 14 00:03:58.546000 audit: BPF prog-id=123 op=LOAD Jan 14 00:03:58.546000 audit[3470]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=3358 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738363565306132626465383564323037613139613462306436376162 Jan 14 00:03:58.546000 audit: BPF prog-id=123 op=UNLOAD Jan 14 00:03:58.546000 audit[3470]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3358 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738363565306132626465383564323037613139613462306436376162 Jan 14 00:03:58.546000 audit: BPF prog-id=124 op=LOAD Jan 14 00:03:58.546000 audit[3470]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=3358 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738363565306132626465383564323037613139613462306436376162 Jan 14 00:03:58.546000 audit: BPF prog-id=125 op=LOAD Jan 14 00:03:58.546000 audit[3470]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=3358 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738363565306132626465383564323037613139613462306436376162 Jan 14 00:03:58.546000 audit: BPF prog-id=125 op=UNLOAD Jan 14 00:03:58.546000 audit[3470]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3358 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738363565306132626465383564323037613139613462306436376162 Jan 14 00:03:58.546000 audit: BPF prog-id=124 op=UNLOAD Jan 14 00:03:58.546000 audit[3470]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3358 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738363565306132626465383564323037613139613462306436376162 Jan 14 00:03:58.546000 audit: BPF prog-id=126 op=LOAD Jan 14 00:03:58.546000 audit[3470]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=3358 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738363565306132626465383564323037613139613462306436376162 Jan 14 00:03:58.551930 containerd[2133]: time="2026-01-14T00:03:58.551810864Z" level=info msg="CreateContainer within sandbox \"e37c4b355897d7eddabce85d2dd639ce457ea24b767b2c11d6ed7c9dc2dcfbc7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5633c324e8d32304bbdf8d5b0fc618f3c3b0158e52b6a63c2a640c8acb5e279c\"" Jan 14 00:03:58.553182 containerd[2133]: time="2026-01-14T00:03:58.552714428Z" level=info msg="StartContainer for \"5633c324e8d32304bbdf8d5b0fc618f3c3b0158e52b6a63c2a640c8acb5e279c\"" Jan 14 00:03:58.554089 containerd[2133]: time="2026-01-14T00:03:58.554066369Z" level=info msg="connecting to shim 5633c324e8d32304bbdf8d5b0fc618f3c3b0158e52b6a63c2a640c8acb5e279c" address="unix:///run/containerd/s/5aa14cc5c9cce912c677eeb3e1b749e218130c3fd787046f07d185b3f3ba6fed" protocol=ttrpc version=3 Jan 14 00:03:58.567154 containerd[2133]: time="2026-01-14T00:03:58.567105295Z" level=info msg="CreateContainer within sandbox \"d780548a90cb353a80f5b38b1cb95319fa0e3c5a35ea47362c9af606c31ff39f\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9be5a0e86d6e6e71c25adf28f49a133f4972af9e24d74b66137506aaea66c57d\"" Jan 14 00:03:58.567723 containerd[2133]: time="2026-01-14T00:03:58.567699532Z" level=info msg="StartContainer for \"9be5a0e86d6e6e71c25adf28f49a133f4972af9e24d74b66137506aaea66c57d\"" Jan 14 00:03:58.571931 containerd[2133]: time="2026-01-14T00:03:58.571899824Z" level=info msg="connecting to shim 9be5a0e86d6e6e71c25adf28f49a133f4972af9e24d74b66137506aaea66c57d" address="unix:///run/containerd/s/41f6182aab1c96cd05ed38f6e17429dcae9c720f1966fad05c1f0b3b7dc0d29e" protocol=ttrpc version=3 Jan 14 00:03:58.574508 systemd[1]: Started cri-containerd-5633c324e8d32304bbdf8d5b0fc618f3c3b0158e52b6a63c2a640c8acb5e279c.scope - libcontainer container 5633c324e8d32304bbdf8d5b0fc618f3c3b0158e52b6a63c2a640c8acb5e279c. Jan 14 00:03:58.587911 containerd[2133]: time="2026-01-14T00:03:58.587876261Z" level=info msg="StartContainer for \"7865e0a2bde85d207a19a4b0d67ab0a07e4f3ba185ff9256527f59fcf44af400\" returns successfully" Jan 14 00:03:58.595966 kubelet[3299]: I0114 00:03:58.595897 3299 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:58.596327 systemd[1]: Started cri-containerd-9be5a0e86d6e6e71c25adf28f49a133f4972af9e24d74b66137506aaea66c57d.scope - libcontainer container 9be5a0e86d6e6e71c25adf28f49a133f4972af9e24d74b66137506aaea66c57d. Jan 14 00:03:58.598856 kubelet[3299]: E0114 00:03:58.598609 3299 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.29:6443/api/v1/nodes\": dial tcp 10.200.20.29:6443: connect: connection refused" node="ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:58.602000 audit: BPF prog-id=127 op=LOAD Jan 14 00:03:58.603000 audit: BPF prog-id=128 op=LOAD Jan 14 00:03:58.603000 audit[3489]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3337 pid=3489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536333363333234653864333233303462626466386435623066633631 Jan 14 00:03:58.603000 audit: BPF prog-id=128 op=UNLOAD Jan 14 00:03:58.603000 audit[3489]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3337 pid=3489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536333363333234653864333233303462626466386435623066633631 Jan 14 00:03:58.603000 audit: BPF prog-id=129 op=LOAD Jan 14 00:03:58.603000 audit[3489]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3337 pid=3489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.603000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536333363333234653864333233303462626466386435623066633631 Jan 14 00:03:58.603000 audit: BPF prog-id=130 op=LOAD Jan 14 00:03:58.603000 audit[3489]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3337 pid=3489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536333363333234653864333233303462626466386435623066633631 Jan 14 00:03:58.604000 audit: BPF prog-id=130 op=UNLOAD Jan 14 00:03:58.604000 audit[3489]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3337 pid=3489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536333363333234653864333233303462626466386435623066633631 Jan 14 00:03:58.604000 audit: BPF prog-id=129 op=UNLOAD Jan 14 00:03:58.604000 audit[3489]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3337 pid=3489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536333363333234653864333233303462626466386435623066633631 Jan 14 00:03:58.604000 audit: BPF prog-id=131 op=LOAD Jan 14 00:03:58.604000 audit[3489]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3337 pid=3489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536333363333234653864333233303462626466386435623066633631 Jan 14 00:03:58.618000 audit: BPF prog-id=132 op=LOAD Jan 14 00:03:58.619000 audit: BPF prog-id=133 op=LOAD Jan 14 00:03:58.619000 audit[3511]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186180 a2=98 a3=0 items=0 ppid=3390 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.619000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962653561306538366436653665373163323561646632386634396131 Jan 14 00:03:58.619000 audit: BPF prog-id=133 op=UNLOAD Jan 14 00:03:58.619000 audit[3511]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3390 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962653561306538366436653665373163323561646632386634396131 Jan 14 00:03:58.619000 audit: BPF prog-id=134 op=LOAD Jan 14 00:03:58.619000 audit[3511]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=3390 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962653561306538366436653665373163323561646632386634396131 Jan 14 00:03:58.619000 audit: BPF prog-id=135 op=LOAD Jan 14 00:03:58.619000 audit[3511]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000186168 a2=98 a3=0 items=0 ppid=3390 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962653561306538366436653665373163323561646632386634396131 Jan 14 00:03:58.619000 audit: BPF prog-id=135 op=UNLOAD Jan 14 00:03:58.619000 audit[3511]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3390 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962653561306538366436653665373163323561646632386634396131 Jan 14 00:03:58.620000 audit: BPF prog-id=134 op=UNLOAD Jan 14 00:03:58.620000 audit[3511]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3390 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.620000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962653561306538366436653665373163323561646632386634396131 Jan 14 00:03:58.620000 audit: BPF prog-id=136 op=LOAD Jan 14 00:03:58.620000 audit[3511]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186648 a2=98 a3=0 items=0 ppid=3390 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:03:58.620000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962653561306538366436653665373163323561646632386634396131 Jan 14 00:03:58.651337 containerd[2133]: time="2026-01-14T00:03:58.651283490Z" level=info msg="StartContainer for \"5633c324e8d32304bbdf8d5b0fc618f3c3b0158e52b6a63c2a640c8acb5e279c\" returns successfully" Jan 14 00:03:58.668036 containerd[2133]: time="2026-01-14T00:03:58.667990415Z" level=info msg="StartContainer for \"9be5a0e86d6e6e71c25adf28f49a133f4972af9e24d74b66137506aaea66c57d\" returns successfully" Jan 14 00:03:58.847318 kubelet[3299]: E0114 00:03:58.847223 3299 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-n-d5ef04779b\" not found" node="ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:58.850202 kubelet[3299]: E0114 00:03:58.848825 3299 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-n-d5ef04779b\" not found" node="ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:58.853917 kubelet[3299]: E0114 00:03:58.853894 3299 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-n-d5ef04779b\" not found" node="ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:59.403233 kubelet[3299]: I0114 00:03:59.403203 3299 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:59.495587 kubelet[3299]: E0114 00:03:59.495550 3299 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4547.0.0-n-d5ef04779b\" not found" node="ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:59.578450 kubelet[3299]: I0114 00:03:59.578202 3299 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:59.613690 kubelet[3299]: I0114 00:03:59.613646 3299 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:59.667392 kubelet[3299]: E0114 00:03:59.667112 3299 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.0.0-n-d5ef04779b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:59.667392 kubelet[3299]: I0114 00:03:59.667145 3299 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:59.680397 kubelet[3299]: E0114 00:03:59.680358 3299 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547.0.0-n-d5ef04779b\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-controller-manager-ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:59.680397 kubelet[3299]: I0114 00:03:59.680390 3299 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:59.684494 kubelet[3299]: E0114 00:03:59.684452 3299 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.0.0-n-d5ef04779b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:59.800252 kubelet[3299]: I0114 00:03:59.800226 3299 apiserver.go:52] "Watching apiserver" Jan 14 00:03:59.817277 kubelet[3299]: I0114 00:03:59.817235 3299 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 00:03:59.854768 kubelet[3299]: I0114 00:03:59.854737 3299 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:59.855094 kubelet[3299]: I0114 00:03:59.855075 3299 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:59.860499 kubelet[3299]: E0114 00:03:59.860470 3299 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.0.0-n-d5ef04779b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4547.0.0-n-d5ef04779b" Jan 14 00:03:59.861223 kubelet[3299]: E0114 00:03:59.861201 3299 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.0.0-n-d5ef04779b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:01.406179 kubelet[3299]: I0114 00:04:01.405412 3299 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:01.412820 kubelet[3299]: W0114 00:04:01.412682 3299 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 00:04:01.819180 systemd[1]: Reload requested from client PID 3569 ('systemctl') (unit session-10.scope)... Jan 14 00:04:01.819197 systemd[1]: Reloading... Jan 14 00:04:01.896205 zram_generator::config[3615]: No configuration found. Jan 14 00:04:02.074605 systemd[1]: Reloading finished in 255 ms. Jan 14 00:04:02.103840 kubelet[3299]: I0114 00:04:02.103781 3299 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 00:04:02.104049 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:04:02.115244 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 00:04:02.115540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:04:02.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:04:02.119015 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 14 00:04:02.119121 kernel: audit: type=1131 audit(1768349042.114:413): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:04:02.122690 systemd[1]: kubelet.service: Consumed 678ms CPU time, 127.5M memory peak. Jan 14 00:04:02.125518 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:04:02.133000 audit: BPF prog-id=137 op=LOAD Jan 14 00:04:02.140328 kernel: audit: type=1334 audit(1768349042.133:414): prog-id=137 op=LOAD Jan 14 00:04:02.139000 audit: BPF prog-id=104 op=UNLOAD Jan 14 00:04:02.143000 audit: BPF prog-id=138 op=LOAD Jan 14 00:04:02.148711 kernel: audit: type=1334 audit(1768349042.139:415): prog-id=104 op=UNLOAD Jan 14 00:04:02.148764 kernel: audit: type=1334 audit(1768349042.143:416): prog-id=138 op=LOAD Jan 14 00:04:02.147000 audit: BPF prog-id=139 op=LOAD Jan 14 00:04:02.154181 kernel: audit: type=1334 audit(1768349042.147:417): prog-id=139 op=LOAD Jan 14 00:04:02.147000 audit: BPF prog-id=105 op=UNLOAD Jan 14 00:04:02.158856 kernel: audit: type=1334 audit(1768349042.147:418): prog-id=105 op=UNLOAD Jan 14 00:04:02.159189 kernel: audit: type=1334 audit(1768349042.147:419): prog-id=106 op=UNLOAD Jan 14 00:04:02.147000 audit: BPF prog-id=106 op=UNLOAD Jan 14 00:04:02.148000 audit: BPF prog-id=140 op=LOAD Jan 14 00:04:02.168448 kernel: audit: type=1334 audit(1768349042.148:420): prog-id=140 op=LOAD Jan 14 00:04:02.148000 audit: BPF prog-id=90 op=UNLOAD Jan 14 00:04:02.172743 kernel: audit: type=1334 audit(1768349042.148:421): prog-id=90 op=UNLOAD Jan 14 00:04:02.153000 audit: BPF prog-id=141 op=LOAD Jan 14 00:04:02.177907 kernel: audit: type=1334 audit(1768349042.153:422): prog-id=141 op=LOAD Jan 14 00:04:02.153000 audit: BPF prog-id=87 op=UNLOAD Jan 14 00:04:02.163000 audit: BPF prog-id=142 op=LOAD Jan 14 00:04:02.163000 audit: BPF prog-id=143 op=LOAD Jan 14 00:04:02.163000 audit: BPF prog-id=88 op=UNLOAD Jan 14 00:04:02.163000 audit: BPF prog-id=89 op=UNLOAD Jan 14 00:04:02.167000 audit: BPF prog-id=144 op=LOAD Jan 14 00:04:02.167000 audit: BPF prog-id=98 op=UNLOAD Jan 14 00:04:02.171000 audit: BPF prog-id=145 op=LOAD Jan 14 00:04:02.171000 audit: BPF prog-id=146 op=LOAD Jan 14 00:04:02.171000 audit: BPF prog-id=99 op=UNLOAD Jan 14 00:04:02.171000 audit: BPF prog-id=100 op=UNLOAD Jan 14 00:04:02.176000 audit: BPF prog-id=147 op=LOAD Jan 14 00:04:02.177000 audit: BPF prog-id=92 op=UNLOAD Jan 14 00:04:02.177000 audit: BPF prog-id=148 op=LOAD Jan 14 00:04:02.177000 audit: BPF prog-id=149 op=LOAD Jan 14 00:04:02.177000 audit: BPF prog-id=93 op=UNLOAD Jan 14 00:04:02.177000 audit: BPF prog-id=94 op=UNLOAD Jan 14 00:04:02.178000 audit: BPF prog-id=150 op=LOAD Jan 14 00:04:02.178000 audit: BPF prog-id=95 op=UNLOAD Jan 14 00:04:02.178000 audit: BPF prog-id=151 op=LOAD Jan 14 00:04:02.178000 audit: BPF prog-id=152 op=LOAD Jan 14 00:04:02.178000 audit: BPF prog-id=96 op=UNLOAD Jan 14 00:04:02.178000 audit: BPF prog-id=97 op=UNLOAD Jan 14 00:04:02.179000 audit: BPF prog-id=153 op=LOAD Jan 14 00:04:02.179000 audit: BPF prog-id=91 op=UNLOAD Jan 14 00:04:02.180000 audit: BPF prog-id=154 op=LOAD Jan 14 00:04:02.180000 audit: BPF prog-id=101 op=UNLOAD Jan 14 00:04:02.180000 audit: BPF prog-id=155 op=LOAD Jan 14 00:04:02.180000 audit: BPF prog-id=156 op=LOAD Jan 14 00:04:02.180000 audit: BPF prog-id=102 op=UNLOAD Jan 14 00:04:02.180000 audit: BPF prog-id=103 op=UNLOAD Jan 14 00:04:02.281805 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 00:04:02.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:04:02.290938 (kubelet)[3683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 00:04:02.319388 kubelet[3683]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 00:04:02.319835 kubelet[3683]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 00:04:02.319900 kubelet[3683]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 00:04:02.320013 kubelet[3683]: I0114 00:04:02.319984 3683 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 00:04:02.324859 kubelet[3683]: I0114 00:04:02.324764 3683 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 14 00:04:02.324859 kubelet[3683]: I0114 00:04:02.324795 3683 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 00:04:02.325007 kubelet[3683]: I0114 00:04:02.324985 3683 server.go:954] "Client rotation is on, will bootstrap in background" Jan 14 00:04:02.326529 kubelet[3683]: I0114 00:04:02.326509 3683 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 14 00:04:02.328186 kubelet[3683]: I0114 00:04:02.328065 3683 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 00:04:02.332062 kubelet[3683]: I0114 00:04:02.332001 3683 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 00:04:02.335239 kubelet[3683]: I0114 00:04:02.334917 3683 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 00:04:02.335239 kubelet[3683]: I0114 00:04:02.335093 3683 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 00:04:02.335496 kubelet[3683]: I0114 00:04:02.335116 3683 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547.0.0-n-d5ef04779b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 00:04:02.335608 kubelet[3683]: I0114 00:04:02.335597 3683 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 00:04:02.335653 kubelet[3683]: I0114 00:04:02.335647 3683 container_manager_linux.go:304] "Creating device plugin manager" Jan 14 00:04:02.335735 kubelet[3683]: I0114 00:04:02.335727 3683 state_mem.go:36] "Initialized new in-memory state store" Jan 14 00:04:02.335903 kubelet[3683]: I0114 00:04:02.335892 3683 kubelet.go:446] "Attempting to sync node with API server" Jan 14 00:04:02.335953 kubelet[3683]: I0114 00:04:02.335945 3683 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 00:04:02.336010 kubelet[3683]: I0114 00:04:02.336004 3683 kubelet.go:352] "Adding apiserver pod source" Jan 14 00:04:02.336062 kubelet[3683]: I0114 00:04:02.336054 3683 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 00:04:02.341297 kubelet[3683]: I0114 00:04:02.341267 3683 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 00:04:02.341928 kubelet[3683]: I0114 00:04:02.341904 3683 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 14 00:04:02.343058 kubelet[3683]: I0114 00:04:02.342944 3683 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 00:04:02.343116 kubelet[3683]: I0114 00:04:02.343066 3683 server.go:1287] "Started kubelet" Jan 14 00:04:02.346348 kubelet[3683]: I0114 00:04:02.346325 3683 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 00:04:02.353527 kubelet[3683]: I0114 00:04:02.353268 3683 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Jan 14 00:04:02.354089 kubelet[3683]: I0114 00:04:02.354069 3683 server.go:479] "Adding debug handlers to kubelet server" Jan 14 00:04:02.354976 kubelet[3683]: I0114 00:04:02.354924 3683 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 00:04:02.362256 kubelet[3683]: I0114 00:04:02.362218 3683 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 00:04:02.364224 kubelet[3683]: E0114 00:04:02.363454 3683 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 00:04:02.364224 kubelet[3683]: I0114 00:04:02.363781 3683 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 00:04:02.364224 kubelet[3683]: I0114 00:04:02.363959 3683 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 00:04:02.374719 kubelet[3683]: I0114 00:04:02.374682 3683 factory.go:221] Registration of the systemd container factory successfully Jan 14 00:04:02.375265 kubelet[3683]: I0114 00:04:02.374947 3683 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 00:04:02.376620 kubelet[3683]: I0114 00:04:02.376495 3683 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 14 00:04:02.376878 kubelet[3683]: I0114 00:04:02.376864 3683 factory.go:221] Registration of the containerd container factory successfully Jan 14 00:04:02.379840 kubelet[3683]: I0114 00:04:02.379816 3683 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 14 00:04:02.379840 kubelet[3683]: I0114 00:04:02.379839 3683 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 14 00:04:02.379935 kubelet[3683]: I0114 00:04:02.379858 3683 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 14 00:04:02.379935 kubelet[3683]: I0114 00:04:02.379863 3683 kubelet.go:2382] "Starting kubelet main sync loop" Jan 14 00:04:02.379935 kubelet[3683]: E0114 00:04:02.379901 3683 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 00:04:02.422793 kubelet[3683]: I0114 00:04:02.422765 3683 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 00:04:02.422793 kubelet[3683]: I0114 00:04:02.422782 3683 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 00:04:02.422793 kubelet[3683]: I0114 00:04:02.422803 3683 state_mem.go:36] "Initialized new in-memory state store" Jan 14 00:04:02.480154 kubelet[3683]: E0114 00:04:02.480105 3683 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 14 00:04:02.668531 kubelet[3683]: I0114 00:04:02.668406 3683 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 00:04:02.668531 kubelet[3683]: I0114 00:04:02.668432 3683 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 00:04:02.668531 kubelet[3683]: I0114 00:04:02.668451 3683 policy_none.go:49] "None policy: Start" Jan 14 00:04:02.668531 kubelet[3683]: I0114 00:04:02.668468 3683 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 00:04:02.668531 kubelet[3683]: I0114 00:04:02.668467 3683 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 00:04:02.668531 kubelet[3683]: I0114 00:04:02.668479 3683 state_mem.go:35] "Initializing new in-memory state store" Jan 14 00:04:02.668757 kubelet[3683]: I0114 00:04:02.668590 3683 state_mem.go:75] "Updated machine memory state" Jan 14 00:04:02.671067 kubelet[3683]: I0114 00:04:02.670867 3683 reconciler.go:26] "Reconciler: start to sync state" Jan 14 00:04:02.673335 kubelet[3683]: I0114 00:04:02.673297 3683 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 14 00:04:02.674065 kubelet[3683]: I0114 00:04:02.674050 3683 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 00:04:02.674244 kubelet[3683]: I0114 00:04:02.674117 3683 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 00:04:02.675319 kubelet[3683]: I0114 00:04:02.674428 3683 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 00:04:02.676958 kubelet[3683]: E0114 00:04:02.676930 3683 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 14 00:04:02.680564 kubelet[3683]: I0114 00:04:02.680540 3683 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:02.682176 kubelet[3683]: I0114 00:04:02.682012 3683 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:02.682584 kubelet[3683]: I0114 00:04:02.682487 3683 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:02.687907 kubelet[3683]: W0114 00:04:02.687862 3683 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 00:04:02.692018 kubelet[3683]: W0114 00:04:02.691796 3683 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 00:04:02.693130 kubelet[3683]: W0114 00:04:02.693047 3683 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 00:04:02.693130 kubelet[3683]: E0114 00:04:02.693084 3683 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547.0.0-n-d5ef04779b\" already exists" pod="kube-system/kube-controller-manager-ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:02.772092 kubelet[3683]: I0114 00:04:02.772035 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0047a24ac6df07bed92220edb44efee1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547.0.0-n-d5ef04779b\" (UID: \"0047a24ac6df07bed92220edb44efee1\") " pod="kube-system/kube-apiserver-ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:02.772092 kubelet[3683]: I0114 00:04:02.772076 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/715bae0514390aba840e0b9cb9df2130-ca-certs\") pod \"kube-controller-manager-ci-4547.0.0-n-d5ef04779b\" (UID: \"715bae0514390aba840e0b9cb9df2130\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:02.772092 kubelet[3683]: I0114 00:04:02.772092 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/715bae0514390aba840e0b9cb9df2130-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547.0.0-n-d5ef04779b\" (UID: \"715bae0514390aba840e0b9cb9df2130\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:02.772092 kubelet[3683]: I0114 00:04:02.772104 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9d63792101fea76fa98dee1a5c524903-kubeconfig\") pod \"kube-scheduler-ci-4547.0.0-n-d5ef04779b\" (UID: \"9d63792101fea76fa98dee1a5c524903\") " pod="kube-system/kube-scheduler-ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:02.772092 kubelet[3683]: I0114 00:04:02.772116 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0047a24ac6df07bed92220edb44efee1-ca-certs\") pod 
\"kube-apiserver-ci-4547.0.0-n-d5ef04779b\" (UID: \"0047a24ac6df07bed92220edb44efee1\") " pod="kube-system/kube-apiserver-ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:02.772373 kubelet[3683]: I0114 00:04:02.772126 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0047a24ac6df07bed92220edb44efee1-k8s-certs\") pod \"kube-apiserver-ci-4547.0.0-n-d5ef04779b\" (UID: \"0047a24ac6df07bed92220edb44efee1\") " pod="kube-system/kube-apiserver-ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:02.772373 kubelet[3683]: I0114 00:04:02.772136 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/715bae0514390aba840e0b9cb9df2130-flexvolume-dir\") pod \"kube-controller-manager-ci-4547.0.0-n-d5ef04779b\" (UID: \"715bae0514390aba840e0b9cb9df2130\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:02.772373 kubelet[3683]: I0114 00:04:02.772144 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/715bae0514390aba840e0b9cb9df2130-k8s-certs\") pod \"kube-controller-manager-ci-4547.0.0-n-d5ef04779b\" (UID: \"715bae0514390aba840e0b9cb9df2130\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:02.772373 kubelet[3683]: I0114 00:04:02.772156 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/715bae0514390aba840e0b9cb9df2130-kubeconfig\") pod \"kube-controller-manager-ci-4547.0.0-n-d5ef04779b\" (UID: \"715bae0514390aba840e0b9cb9df2130\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:02.783505 kubelet[3683]: I0114 00:04:02.783481 3683 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:02.796566 kubelet[3683]: I0114 00:04:02.796544 3683 kubelet_node_status.go:124] "Node was previously registered" node="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:02.796665 kubelet[3683]: I0114 00:04:02.796639 3683 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:03.337629 kubelet[3683]: I0114 00:04:03.337398 3683 apiserver.go:52] "Watching apiserver" Jan 14 00:04:03.365179 kubelet[3683]: I0114 00:04:03.364952 3683 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 00:04:03.410150 kubelet[3683]: I0114 00:04:03.410121 3683 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:03.411596 kubelet[3683]: I0114 00:04:03.411572 3683 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:03.432299 kubelet[3683]: W0114 00:04:03.432276 3683 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 00:04:03.432421 kubelet[3683]: E0114 00:04:03.432328 3683 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.0.0-n-d5ef04779b\" already exists" pod="kube-system/kube-apiserver-ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:03.432490 kubelet[3683]: W0114 00:04:03.432473 3683 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can 
result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 00:04:03.432528 kubelet[3683]: E0114 00:04:03.432499 3683 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.0.0-n-d5ef04779b\" already exists" pod="kube-system/kube-scheduler-ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:03.453473 kubelet[3683]: I0114 00:04:03.453421 3683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4547.0.0-n-d5ef04779b" podStartSLOduration=1.453406147 podStartE2EDuration="1.453406147s" podCreationTimestamp="2026-01-14 00:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:04:03.43866124 +0000 UTC m=+1.144172696" watchObservedRunningTime="2026-01-14 00:04:03.453406147 +0000 UTC m=+1.158917603" Jan 14 00:04:03.477433 kubelet[3683]: I0114 00:04:03.477382 3683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4547.0.0-n-d5ef04779b" podStartSLOduration=1.477367512 podStartE2EDuration="1.477367512s" podCreationTimestamp="2026-01-14 00:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:04:03.453653736 +0000 UTC m=+1.159165200" watchObservedRunningTime="2026-01-14 00:04:03.477367512 +0000 UTC m=+1.182878976" Jan 14 00:04:03.488315 kubelet[3683]: I0114 00:04:03.488252 3683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4547.0.0-n-d5ef04779b" podStartSLOduration=2.488231573 podStartE2EDuration="2.488231573s" podCreationTimestamp="2026-01-14 00:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:04:03.477597997 +0000 UTC m=+1.183109453" watchObservedRunningTime="2026-01-14 00:04:03.488231573 +0000 UTC m=+1.193743053" Jan 14 00:04:07.646723 kubelet[3683]: I0114 00:04:07.646689 3683 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 00:04:07.647209 kubelet[3683]: I0114 00:04:07.647129 3683 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 00:04:07.647245 containerd[2133]: time="2026-01-14T00:04:07.646971964Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 14 00:04:08.488121 systemd[1]: Created slice kubepods-besteffort-pod99162a01_4c86_40c4_b2fe_962b62da0f07.slice - libcontainer container kubepods-besteffort-pod99162a01_4c86_40c4_b2fe_962b62da0f07.slice. 
Jan 14 00:04:08.508580 kubelet[3683]: I0114 00:04:08.508531 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/99162a01-4c86-40c4-b2fe-962b62da0f07-kube-proxy\") pod \"kube-proxy-z4fqb\" (UID: \"99162a01-4c86-40c4-b2fe-962b62da0f07\") " pod="kube-system/kube-proxy-z4fqb" Jan 14 00:04:08.508848 kubelet[3683]: I0114 00:04:08.508764 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99162a01-4c86-40c4-b2fe-962b62da0f07-lib-modules\") pod \"kube-proxy-z4fqb\" (UID: \"99162a01-4c86-40c4-b2fe-962b62da0f07\") " pod="kube-system/kube-proxy-z4fqb" Jan 14 00:04:08.508848 kubelet[3683]: I0114 00:04:08.508793 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vknzq\" (UniqueName: \"kubernetes.io/projected/99162a01-4c86-40c4-b2fe-962b62da0f07-kube-api-access-vknzq\") pod \"kube-proxy-z4fqb\" (UID: \"99162a01-4c86-40c4-b2fe-962b62da0f07\") " pod="kube-system/kube-proxy-z4fqb" Jan 14 00:04:08.508848 kubelet[3683]: I0114 00:04:08.508820 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99162a01-4c86-40c4-b2fe-962b62da0f07-xtables-lock\") pod \"kube-proxy-z4fqb\" (UID: \"99162a01-4c86-40c4-b2fe-962b62da0f07\") " pod="kube-system/kube-proxy-z4fqb" Jan 14 00:04:08.801787 containerd[2133]: time="2026-01-14T00:04:08.801503602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z4fqb,Uid:99162a01-4c86-40c4-b2fe-962b62da0f07,Namespace:kube-system,Attempt:0,}" Jan 14 00:04:08.816460 systemd[1]: Created slice kubepods-besteffort-pod259bfda6_894e_4100_bdb1_f119ca3af034.slice - libcontainer container kubepods-besteffort-pod259bfda6_894e_4100_bdb1_f119ca3af034.slice. Jan 14 00:04:08.842004 containerd[2133]: time="2026-01-14T00:04:08.841955297Z" level=info msg="connecting to shim 1b8a3d14ed849c3d175c6d0c01a11023a7c66af6fc49c65ac9a5dfc8e9551b9c" address="unix:///run/containerd/s/f6207ef7a3d5a80e86a363b204f8cec4cbad27b4c800d4525debba213fff22a5" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:04:08.863377 systemd[1]: Started cri-containerd-1b8a3d14ed849c3d175c6d0c01a11023a7c66af6fc49c65ac9a5dfc8e9551b9c.scope - libcontainer container 1b8a3d14ed849c3d175c6d0c01a11023a7c66af6fc49c65ac9a5dfc8e9551b9c. 
Jan 14 00:04:08.869000 audit: BPF prog-id=157 op=LOAD Jan 14 00:04:08.874050 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 14 00:04:08.874133 kernel: audit: type=1334 audit(1768349048.869:455): prog-id=157 op=LOAD Jan 14 00:04:08.877000 audit: BPF prog-id=158 op=LOAD Jan 14 00:04:08.883283 kernel: audit: type=1334 audit(1768349048.877:456): prog-id=158 op=LOAD Jan 14 00:04:08.877000 audit[3751]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3739 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:08.900392 kernel: audit: type=1300 audit(1768349048.877:456): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3739 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:08.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162386133643134656438343963336431373563366430633031613131 Jan 14 00:04:08.917888 kernel: audit: type=1327 audit(1768349048.877:456): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162386133643134656438343963336431373563366430633031613131 Jan 14 00:04:08.877000 audit: BPF prog-id=158 op=UNLOAD Jan 14 00:04:08.919908 kubelet[3683]: I0114 00:04:08.919388 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/259bfda6-894e-4100-bdb1-f119ca3af034-var-lib-calico\") pod \"tigera-operator-7dcd859c48-8889r\" (UID: \"259bfda6-894e-4100-bdb1-f119ca3af034\") " pod="tigera-operator/tigera-operator-7dcd859c48-8889r" Jan 14 00:04:08.919908 kubelet[3683]: I0114 00:04:08.919442 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k984d\" (UniqueName: \"kubernetes.io/projected/259bfda6-894e-4100-bdb1-f119ca3af034-kube-api-access-k984d\") pod \"tigera-operator-7dcd859c48-8889r\" (UID: \"259bfda6-894e-4100-bdb1-f119ca3af034\") " pod="tigera-operator/tigera-operator-7dcd859c48-8889r" Jan 14 00:04:08.923660 kernel: audit: type=1334 audit(1768349048.877:457): prog-id=158 op=UNLOAD Jan 14 00:04:08.877000 audit[3751]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3739 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:08.942248 kernel: audit: type=1300 audit(1768349048.877:457): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3739 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:08.942350 kernel: audit: type=1327 audit(1768349048.877:457): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162386133643134656438343963336431373563366430633031613131 Jan 14 00:04:08.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162386133643134656438343963336431373563366430633031613131 Jan 14 00:04:08.877000 audit: BPF prog-id=159 op=LOAD Jan 14 00:04:08.963514 kernel: audit: type=1334 audit(1768349048.877:458): prog-id=159 op=LOAD Jan 14 00:04:08.877000 audit[3751]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3739 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:08.980786 kernel: audit: type=1300 audit(1768349048.877:458): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3739 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:08.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162386133643134656438343963336431373563366430633031613131 Jan 14 00:04:08.998403 kernel: audit: type=1327 audit(1768349048.877:458): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162386133643134656438343963336431373563366430633031613131 Jan 14 00:04:08.877000 audit: BPF prog-id=160 op=LOAD Jan 14 00:04:08.877000 audit[3751]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3739 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:08.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162386133643134656438343963336431373563366430633031613131 Jan 14 00:04:08.877000 audit: BPF prog-id=160 op=UNLOAD Jan 14 00:04:08.877000 audit[3751]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3739 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:08.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162386133643134656438343963336431373563366430633031613131 Jan 14 00:04:08.877000 audit: BPF prog-id=159 op=UNLOAD Jan 14 00:04:08.877000 audit[3751]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 
a0=15 a1=0 a2=0 a3=0 items=0 ppid=3739 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:08.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162386133643134656438343963336431373563366430633031613131 Jan 14 00:04:08.877000 audit: BPF prog-id=161 op=LOAD Jan 14 00:04:08.877000 audit[3751]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3739 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:08.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162386133643134656438343963336431373563366430633031613131 Jan 14 00:04:09.004464 containerd[2133]: time="2026-01-14T00:04:09.004411050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z4fqb,Uid:99162a01-4c86-40c4-b2fe-962b62da0f07,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b8a3d14ed849c3d175c6d0c01a11023a7c66af6fc49c65ac9a5dfc8e9551b9c\"" Jan 14 00:04:09.008346 containerd[2133]: time="2026-01-14T00:04:09.008306673Z" level=info msg="CreateContainer within sandbox \"1b8a3d14ed849c3d175c6d0c01a11023a7c66af6fc49c65ac9a5dfc8e9551b9c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 00:04:09.036714 containerd[2133]: time="2026-01-14T00:04:09.036668452Z" level=info msg="Container 07b1be486b2c53382a081daf94fa6b67a45320dca623802c596114281b1ee750: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:04:09.052870 containerd[2133]: time="2026-01-14T00:04:09.052694183Z" level=info msg="CreateContainer within sandbox \"1b8a3d14ed849c3d175c6d0c01a11023a7c66af6fc49c65ac9a5dfc8e9551b9c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"07b1be486b2c53382a081daf94fa6b67a45320dca623802c596114281b1ee750\"" Jan 14 00:04:09.055202 containerd[2133]: time="2026-01-14T00:04:09.054411983Z" level=info msg="StartContainer for \"07b1be486b2c53382a081daf94fa6b67a45320dca623802c596114281b1ee750\"" Jan 14 00:04:09.055870 containerd[2133]: time="2026-01-14T00:04:09.055844905Z" level=info msg="connecting to shim 07b1be486b2c53382a081daf94fa6b67a45320dca623802c596114281b1ee750" address="unix:///run/containerd/s/f6207ef7a3d5a80e86a363b204f8cec4cbad27b4c800d4525debba213fff22a5" protocol=ttrpc version=3 Jan 14 00:04:09.070327 systemd[1]: Started cri-containerd-07b1be486b2c53382a081daf94fa6b67a45320dca623802c596114281b1ee750.scope - libcontainer container 07b1be486b2c53382a081daf94fa6b67a45320dca623802c596114281b1ee750. 
Jan 14 00:04:09.117000 audit: BPF prog-id=162 op=LOAD Jan 14 00:04:09.117000 audit[3775]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3739 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037623162653438366232633533333832613038316461663934666136 Jan 14 00:04:09.117000 audit: BPF prog-id=163 op=LOAD Jan 14 00:04:09.117000 audit[3775]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3739 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037623162653438366232633533333832613038316461663934666136 Jan 14 00:04:09.117000 audit: BPF prog-id=163 op=UNLOAD Jan 14 00:04:09.117000 audit[3775]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3739 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037623162653438366232633533333832613038316461663934666136 Jan 14 00:04:09.117000 audit: BPF prog-id=162 op=UNLOAD Jan 14 00:04:09.117000 audit[3775]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3739 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037623162653438366232633533333832613038316461663934666136 Jan 14 00:04:09.117000 audit: BPF prog-id=164 op=LOAD Jan 14 00:04:09.117000 audit[3775]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=3739 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037623162653438366232633533333832613038316461663934666136 Jan 14 00:04:09.121267 containerd[2133]: time="2026-01-14T00:04:09.121234949Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-7dcd859c48-8889r,Uid:259bfda6-894e-4100-bdb1-f119ca3af034,Namespace:tigera-operator,Attempt:0,}" Jan 14 00:04:09.139372 containerd[2133]: time="2026-01-14T00:04:09.139345398Z" level=info msg="StartContainer for \"07b1be486b2c53382a081daf94fa6b67a45320dca623802c596114281b1ee750\" returns successfully" Jan 14 00:04:09.167586 containerd[2133]: time="2026-01-14T00:04:09.167535423Z" level=info msg="connecting to shim 09a6aa0d1922e2569c1e61e3d81c2a72da32bc7af50db6ac8920786f3e9f33b5" address="unix:///run/containerd/s/ae47e3b396d50962bb837f63462f24e3231b0084acdf66f8abbf7bfaeb0e11f4" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:04:09.184436 systemd[1]: Started cri-containerd-09a6aa0d1922e2569c1e61e3d81c2a72da32bc7af50db6ac8920786f3e9f33b5.scope - libcontainer container 09a6aa0d1922e2569c1e61e3d81c2a72da32bc7af50db6ac8920786f3e9f33b5. Jan 14 00:04:09.192000 audit: BPF prog-id=165 op=LOAD Jan 14 00:04:09.193000 audit: BPF prog-id=166 op=LOAD Jan 14 00:04:09.193000 audit[3831]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=3818 pid=3831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039613661613064313932326532353639633165363165336438316332 Jan 14 00:04:09.193000 audit: BPF prog-id=166 op=UNLOAD Jan 14 00:04:09.193000 audit[3831]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3818 pid=3831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039613661613064313932326532353639633165363165336438316332 Jan 14 00:04:09.193000 audit: BPF prog-id=167 op=LOAD Jan 14 00:04:09.193000 audit[3831]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=3818 pid=3831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039613661613064313932326532353639633165363165336438316332 Jan 14 00:04:09.194000 audit: BPF prog-id=168 op=LOAD Jan 14 00:04:09.194000 audit[3831]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=3818 pid=3831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.194000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039613661613064313932326532353639633165363165336438316332 Jan 14 00:04:09.194000 audit: BPF prog-id=168 op=UNLOAD Jan 14 00:04:09.194000 audit[3831]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3818 pid=3831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039613661613064313932326532353639633165363165336438316332 Jan 14 00:04:09.194000 audit: BPF prog-id=167 op=UNLOAD Jan 14 00:04:09.194000 audit[3831]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3818 pid=3831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039613661613064313932326532353639633165363165336438316332 Jan 14 00:04:09.194000 audit: BPF prog-id=169 op=LOAD Jan 14 00:04:09.194000 audit[3831]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=3818 pid=3831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039613661613064313932326532353639633165363165336438316332 Jan 14 00:04:09.221864 containerd[2133]: time="2026-01-14T00:04:09.221741112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-8889r,Uid:259bfda6-894e-4100-bdb1-f119ca3af034,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"09a6aa0d1922e2569c1e61e3d81c2a72da32bc7af50db6ac8920786f3e9f33b5\"" Jan 14 00:04:09.223939 containerd[2133]: time="2026-01-14T00:04:09.223756668Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 14 00:04:09.239000 audit[3886]: NETFILTER_CFG table=mangle:57 family=10 entries=1 op=nft_register_chain pid=3886 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:04:09.239000 audit[3886]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff3780110 a2=0 a3=1 items=0 ppid=3789 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.239000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 00:04:09.242000 audit[3887]: NETFILTER_CFG table=mangle:58 family=2 entries=1 
op=nft_register_chain pid=3887 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:04:09.242000 audit[3887]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd3daa140 a2=0 a3=1 items=0 ppid=3789 pid=3887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.242000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 00:04:09.243000 audit[3888]: NETFILTER_CFG table=nat:59 family=10 entries=1 op=nft_register_chain pid=3888 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:04:09.243000 audit[3888]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe5988520 a2=0 a3=1 items=0 ppid=3789 pid=3888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.243000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 00:04:09.245000 audit[3889]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_chain pid=3889 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:04:09.245000 audit[3889]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd1af7d70 a2=0 a3=1 items=0 ppid=3789 pid=3889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.245000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 00:04:09.246000 audit[3890]: NETFILTER_CFG table=filter:61 family=10 entries=1 op=nft_register_chain pid=3890 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:04:09.246000 audit[3890]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffc747d30 a2=0 a3=1 items=0 ppid=3789 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.246000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 00:04:09.247000 audit[3891]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_chain pid=3891 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:04:09.247000 audit[3891]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffcfde410 a2=0 a3=1 items=0 ppid=3789 pid=3891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.247000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 00:04:09.342000 audit[3892]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3892 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:04:09.342000 audit[3892]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc7bf9c90 a2=0 a3=1 items=0 
ppid=3789 pid=3892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.342000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 00:04:09.345000 audit[3894]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3894 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:04:09.345000 audit[3894]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd6d2ff70 a2=0 a3=1 items=0 ppid=3789 pid=3894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.345000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 14 00:04:09.348000 audit[3897]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_rule pid=3897 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:04:09.348000 audit[3897]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc35c1e70 a2=0 a3=1 items=0 ppid=3789 pid=3897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.348000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 14 00:04:09.349000 audit[3898]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_chain pid=3898 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:04:09.349000 audit[3898]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffceb856a0 a2=0 a3=1 items=0 ppid=3789 pid=3898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.349000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 00:04:09.351000 audit[3900]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3900 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:04:09.351000 audit[3900]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc66d9af0 a2=0 a3=1 items=0 ppid=3789 pid=3900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.351000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 00:04:09.352000 audit[3901]: NETFILTER_CFG table=filter:68 family=2 
entries=1 op=nft_register_chain pid=3901 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:04:09.352000 audit[3901]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffca8f99b0 a2=0 a3=1 items=0 ppid=3789 pid=3901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.352000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 00:04:09.354000 audit[3903]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3903 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:04:09.354000 audit[3903]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff927f9a0 a2=0 a3=1 items=0 ppid=3789 pid=3903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.354000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 00:04:09.357000 audit[3906]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=3906 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:04:09.357000 audit[3906]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd3855a80 a2=0 a3=1 items=0 ppid=3789 pid=3906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.357000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 14 00:04:09.358000 audit[3907]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=3907 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:04:09.358000 audit[3907]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffca9b0420 a2=0 a3=1 items=0 ppid=3789 pid=3907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.358000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 00:04:09.360000 audit[3909]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:04:09.360000 audit[3909]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc12c81c0 a2=0 a3=1 items=0 ppid=3789 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.360000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 00:04:09.361000 audit[3910]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=3910 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:04:09.361000 audit[3910]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffa353ca0 a2=0 a3=1 items=0 ppid=3789 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.361000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 00:04:09.363000 audit[3912]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=3912 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:04:09.363000 audit[3912]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdd240440 a2=0 a3=1 items=0 ppid=3789 pid=3912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.363000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 00:04:09.368000 audit[3915]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=3915 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:04:09.368000 audit[3915]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcbc8af80 a2=0 a3=1 items=0 ppid=3789 pid=3915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.368000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 00:04:09.371000 audit[3918]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=3918 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:04:09.371000 audit[3918]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff94afaa0 a2=0 a3=1 items=0 ppid=3789 pid=3918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.371000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 00:04:09.372000 audit[3919]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3919 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:04:09.372000 audit[3919]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd4202ef0 a2=0 a3=1 items=0 ppid=3789 pid=3919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.372000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 00:04:09.374000 audit[3921]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3921 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:04:09.374000 audit[3921]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffdd5b46a0 a2=0 a3=1 items=0 ppid=3789 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.374000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 00:04:09.377000 audit[3924]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_rule pid=3924 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:04:09.377000 audit[3924]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc2f19c70 a2=0 a3=1 items=0 ppid=3789 pid=3924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.377000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 00:04:09.378000 audit[3925]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_chain pid=3925 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:04:09.378000 audit[3925]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc6552610 a2=0 a3=1 items=0 ppid=3789 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.378000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 00:04:09.380000 audit[3927]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=3927 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:04:09.380000 audit[3927]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffd7163650 a2=0 a3=1 items=0 ppid=3789 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.380000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 00:04:09.459000 audit[3933]: NETFILTER_CFG table=filter:82 family=2 entries=8 op=nft_register_rule pid=3933 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:09.459000 audit[3933]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffffb10ba50 a2=0 a3=1 items=0 ppid=3789 pid=3933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.459000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:09.465000 audit[3933]: NETFILTER_CFG table=nat:83 family=2 entries=14 op=nft_register_chain pid=3933 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:09.465000 audit[3933]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=fffffb10ba50 a2=0 a3=1 items=0 ppid=3789 pid=3933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.465000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:09.466000 audit[3938]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3938 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:04:09.466000 audit[3938]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc12bf840 a2=0 a3=1 items=0 ppid=3789 pid=3938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.466000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 00:04:09.469000 audit[3940]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=3940 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:04:09.469000 audit[3940]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffcb6c94b0 a2=0 a3=1 items=0 ppid=3789 pid=3940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.469000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 14 00:04:09.472000 audit[3943]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=3943 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:04:09.472000 audit[3943]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe596db80 a2=0 a3=1 items=0 ppid=3789 pid=3943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.472000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 14 00:04:09.473000 audit[3944]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=3944 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:04:09.473000 audit[3944]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff8dc7d90 a2=0 a3=1 items=0 ppid=3789 pid=3944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.473000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 00:04:09.476000 audit[3946]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=3946 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:04:09.476000 audit[3946]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc1e48e30 a2=0 a3=1 items=0 ppid=3789 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.476000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 00:04:09.477000 audit[3947]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3947 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:04:09.477000 audit[3947]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd6757840 a2=0 a3=1 items=0 ppid=3789 pid=3947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.477000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 00:04:09.479000 audit[3949]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3949 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:04:09.479000 audit[3949]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc75bf080 a2=0 a3=1 items=0 ppid=3789 pid=3949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.479000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 14 00:04:09.482000 audit[3952]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=3952 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:04:09.482000 audit[3952]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffe8252bd0 a2=0 a3=1 items=0 ppid=3789 pid=3952 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.482000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 00:04:09.483000 audit[3953]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=3953 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:04:09.483000 audit[3953]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcd369090 a2=0 a3=1 items=0 ppid=3789 pid=3953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.483000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 00:04:09.485000 audit[3955]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3955 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:04:09.485000 audit[3955]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe22e55a0 a2=0 a3=1 items=0 ppid=3789 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.485000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 00:04:09.487000 audit[3956]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=3956 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:04:09.487000 audit[3956]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc933ab00 a2=0 a3=1 items=0 ppid=3789 pid=3956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.487000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 00:04:09.489000 audit[3958]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=3958 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:04:09.489000 audit[3958]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe2d86a60 a2=0 a3=1 items=0 ppid=3789 pid=3958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.489000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 00:04:09.493000 audit[3961]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule 
pid=3961 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:04:09.493000 audit[3961]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe6937750 a2=0 a3=1 items=0 ppid=3789 pid=3961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.493000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 00:04:09.497000 audit[3964]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=3964 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:04:09.497000 audit[3964]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffffc86b80 a2=0 a3=1 items=0 ppid=3789 pid=3964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.497000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 14 00:04:09.499000 audit[3965]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3965 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:04:09.499000 audit[3965]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff9a853d0 a2=0 a3=1 items=0 ppid=3789 pid=3965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.499000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 00:04:09.501000 audit[3967]: NETFILTER_CFG table=nat:99 family=10 entries=1 op=nft_register_rule pid=3967 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:04:09.501000 audit[3967]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffd7937c10 a2=0 a3=1 items=0 ppid=3789 pid=3967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.501000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 00:04:09.504000 audit[3970]: NETFILTER_CFG table=nat:100 family=10 entries=1 op=nft_register_rule pid=3970 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:04:09.504000 audit[3970]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc2c18800 a2=0 a3=1 items=0 ppid=3789 pid=3970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.504000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 00:04:09.505000 audit[3971]: NETFILTER_CFG table=nat:101 family=10 entries=1 op=nft_register_chain pid=3971 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:04:09.505000 audit[3971]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd52ccd40 a2=0 a3=1 items=0 ppid=3789 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.505000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 00:04:09.507000 audit[3973]: NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=3973 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:04:09.507000 audit[3973]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffefae0cd0 a2=0 a3=1 items=0 ppid=3789 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.507000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 00:04:09.508000 audit[3974]: NETFILTER_CFG table=filter:103 family=10 entries=1 op=nft_register_chain pid=3974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:04:09.508000 audit[3974]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffda2e1710 a2=0 a3=1 items=0 ppid=3789 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.508000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 00:04:09.510000 audit[3976]: NETFILTER_CFG table=filter:104 family=10 entries=1 op=nft_register_rule pid=3976 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:04:09.510000 audit[3976]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff47a8a90 a2=0 a3=1 items=0 ppid=3789 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.510000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 00:04:09.513000 audit[3979]: NETFILTER_CFG table=filter:105 family=10 entries=1 op=nft_register_rule pid=3979 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:04:09.513000 audit[3979]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe6b5e250 a2=0 a3=1 items=0 ppid=3789 pid=3979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 14 00:04:09.513000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 00:04:09.516000 audit[3981]: NETFILTER_CFG table=filter:106 family=10 entries=3 op=nft_register_rule pid=3981 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 00:04:09.516000 audit[3981]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffee8fecc0 a2=0 a3=1 items=0 ppid=3789 pid=3981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.516000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:09.517000 audit[3981]: NETFILTER_CFG table=nat:107 family=10 entries=7 op=nft_register_chain pid=3981 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 00:04:09.517000 audit[3981]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffee8fecc0 a2=0 a3=1 items=0 ppid=3789 pid=3981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:09.517000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:10.260181 kubelet[3683]: I0114 00:04:10.260102 3683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z4fqb" podStartSLOduration=2.260084157 podStartE2EDuration="2.260084157s" podCreationTimestamp="2026-01-14 00:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:04:09.436393757 +0000 UTC m=+7.141905301" watchObservedRunningTime="2026-01-14 00:04:10.260084157 +0000 UTC m=+7.965595613" Jan 14 00:04:12.362761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4015328585.mount: Deactivated successfully. 
Jan 14 00:04:13.358984 containerd[2133]: time="2026-01-14T00:04:13.358498449Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:04:13.361493 containerd[2133]: time="2026-01-14T00:04:13.361450637Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Jan 14 00:04:13.364670 containerd[2133]: time="2026-01-14T00:04:13.364618088Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:04:13.369680 containerd[2133]: time="2026-01-14T00:04:13.369636653Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:04:13.374965 containerd[2133]: time="2026-01-14T00:04:13.373718380Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 4.149861022s" Jan 14 00:04:13.374965 containerd[2133]: time="2026-01-14T00:04:13.373766101Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 14 00:04:13.378470 containerd[2133]: time="2026-01-14T00:04:13.378439035Z" level=info msg="CreateContainer within sandbox \"09a6aa0d1922e2569c1e61e3d81c2a72da32bc7af50db6ac8920786f3e9f33b5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 14 00:04:13.439606 containerd[2133]: time="2026-01-14T00:04:13.439566200Z" level=info msg="Container 07fca76b7f841fdae0830677bb4c33e0f169bddd346017ded09aa8be69d07203: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:04:13.455042 containerd[2133]: time="2026-01-14T00:04:13.454992944Z" level=info msg="CreateContainer within sandbox \"09a6aa0d1922e2569c1e61e3d81c2a72da32bc7af50db6ac8920786f3e9f33b5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"07fca76b7f841fdae0830677bb4c33e0f169bddd346017ded09aa8be69d07203\"" Jan 14 00:04:13.455671 containerd[2133]: time="2026-01-14T00:04:13.455644927Z" level=info msg="StartContainer for \"07fca76b7f841fdae0830677bb4c33e0f169bddd346017ded09aa8be69d07203\"" Jan 14 00:04:13.457470 containerd[2133]: time="2026-01-14T00:04:13.457442209Z" level=info msg="connecting to shim 07fca76b7f841fdae0830677bb4c33e0f169bddd346017ded09aa8be69d07203" address="unix:///run/containerd/s/ae47e3b396d50962bb837f63462f24e3231b0084acdf66f8abbf7bfaeb0e11f4" protocol=ttrpc version=3 Jan 14 00:04:13.473333 systemd[1]: Started cri-containerd-07fca76b7f841fdae0830677bb4c33e0f169bddd346017ded09aa8be69d07203.scope - libcontainer container 07fca76b7f841fdae0830677bb4c33e0f169bddd346017ded09aa8be69d07203. 
Jan 14 00:04:13.480000 audit: BPF prog-id=170 op=LOAD Jan 14 00:04:13.481000 audit: BPF prog-id=171 op=LOAD Jan 14 00:04:13.481000 audit[3990]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3818 pid=3990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:13.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037666361373662376638343166646165303833303637376262346333 Jan 14 00:04:13.481000 audit: BPF prog-id=171 op=UNLOAD Jan 14 00:04:13.481000 audit[3990]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3818 pid=3990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:13.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037666361373662376638343166646165303833303637376262346333 Jan 14 00:04:13.481000 audit: BPF prog-id=172 op=LOAD Jan 14 00:04:13.481000 audit[3990]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3818 pid=3990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:13.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037666361373662376638343166646165303833303637376262346333 Jan 14 00:04:13.481000 audit: BPF prog-id=173 op=LOAD Jan 14 00:04:13.481000 audit[3990]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3818 pid=3990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:13.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037666361373662376638343166646165303833303637376262346333 Jan 14 00:04:13.481000 audit: BPF prog-id=173 op=UNLOAD Jan 14 00:04:13.481000 audit[3990]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3818 pid=3990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:13.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037666361373662376638343166646165303833303637376262346333 Jan 14 00:04:13.481000 audit: BPF prog-id=172 op=UNLOAD Jan 14 00:04:13.481000 audit[3990]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3818 pid=3990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:13.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037666361373662376638343166646165303833303637376262346333 Jan 14 00:04:13.481000 audit: BPF prog-id=174 op=LOAD Jan 14 00:04:13.481000 audit[3990]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3818 pid=3990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:13.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037666361373662376638343166646165303833303637376262346333 Jan 14 00:04:13.500673 containerd[2133]: time="2026-01-14T00:04:13.500573026Z" level=info msg="StartContainer for \"07fca76b7f841fdae0830677bb4c33e0f169bddd346017ded09aa8be69d07203\" returns successfully" Jan 14 00:04:18.766864 sudo[2666]: pam_unix(sudo:session): session closed for user root Jan 14 00:04:18.765000 audit[2666]: USER_END pid=2666 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:04:18.770315 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 14 00:04:18.770385 kernel: audit: type=1106 audit(1768349058.765:535): pid=2666 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:04:18.783000 audit[2666]: CRED_DISP pid=2666 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:04:18.800152 kernel: audit: type=1104 audit(1768349058.783:536): pid=2666 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:04:18.862245 sshd[2665]: Connection closed by 10.200.16.10 port 36330 Jan 14 00:04:18.863971 sshd-session[2661]: pam_unix(sshd:session): session closed for user core Jan 14 00:04:18.864000 audit[2661]: USER_END pid=2661 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:04:18.889610 systemd[1]: sshd@6-10.200.20.29:22-10.200.16.10:36330.service: Deactivated successfully. Jan 14 00:04:18.895697 systemd[1]: session-10.scope: Deactivated successfully. Jan 14 00:04:18.895941 systemd[1]: session-10.scope: Consumed 3.010s CPU time, 219.7M memory peak. 
Jan 14 00:04:18.865000 audit[2661]: CRED_DISP pid=2661 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:04:18.911914 kernel: audit: type=1106 audit(1768349058.864:537): pid=2661 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:04:18.912027 kernel: audit: type=1104 audit(1768349058.865:538): pid=2661 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:04:18.912114 systemd-logind[2105]: Session 10 logged out. Waiting for processes to exit. Jan 14 00:04:18.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.29:22-10.200.16.10:36330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:04:18.928767 kernel: audit: type=1131 audit(1768349058.888:539): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.29:22-10.200.16.10:36330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:04:18.929094 systemd-logind[2105]: Removed session 10. Jan 14 00:04:20.131000 audit[4065]: NETFILTER_CFG table=filter:108 family=2 entries=15 op=nft_register_rule pid=4065 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:20.148208 kernel: audit: type=1325 audit(1768349060.131:540): table=filter:108 family=2 entries=15 op=nft_register_rule pid=4065 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:20.131000 audit[4065]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc4508610 a2=0 a3=1 items=0 ppid=3789 pid=4065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:20.131000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:20.184504 kernel: audit: type=1300 audit(1768349060.131:540): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc4508610 a2=0 a3=1 items=0 ppid=3789 pid=4065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:20.184591 kernel: audit: type=1327 audit(1768349060.131:540): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:20.178000 audit[4065]: NETFILTER_CFG table=nat:109 family=2 entries=12 op=nft_register_rule pid=4065 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:20.194920 kernel: audit: type=1325 audit(1768349060.178:541): table=nat:109 family=2 entries=12 op=nft_register_rule pid=4065 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:20.178000 
audit[4065]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc4508610 a2=0 a3=1 items=0 ppid=3789 pid=4065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:20.178000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:20.216175 kernel: audit: type=1300 audit(1768349060.178:541): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc4508610 a2=0 a3=1 items=0 ppid=3789 pid=4065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:20.219000 audit[4067]: NETFILTER_CFG table=filter:110 family=2 entries=16 op=nft_register_rule pid=4067 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:20.219000 audit[4067]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffbbb3fe0 a2=0 a3=1 items=0 ppid=3789 pid=4067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:20.219000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:20.222000 audit[4067]: NETFILTER_CFG table=nat:111 family=2 entries=12 op=nft_register_rule pid=4067 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:20.222000 audit[4067]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffbbb3fe0 a2=0 a3=1 items=0 ppid=3789 pid=4067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:20.222000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:22.314000 audit[4069]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=4069 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:22.314000 audit[4069]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc23bb600 a2=0 a3=1 items=0 ppid=3789 pid=4069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:22.314000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:22.318000 audit[4069]: NETFILTER_CFG table=nat:113 family=2 entries=12 op=nft_register_rule pid=4069 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:22.318000 audit[4069]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc23bb600 a2=0 a3=1 items=0 ppid=3789 pid=4069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:22.318000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:22.343000 audit[4071]: NETFILTER_CFG table=filter:114 family=2 entries=18 op=nft_register_rule pid=4071 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:22.343000 audit[4071]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff073aa70 a2=0 a3=1 items=0 ppid=3789 pid=4071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:22.343000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:22.348000 audit[4071]: NETFILTER_CFG table=nat:115 family=2 entries=12 op=nft_register_rule pid=4071 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:22.348000 audit[4071]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff073aa70 a2=0 a3=1 items=0 ppid=3789 pid=4071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:22.348000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:23.360000 audit[4074]: NETFILTER_CFG table=filter:116 family=2 entries=19 op=nft_register_rule pid=4074 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:23.360000 audit[4074]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc9667700 a2=0 a3=1 items=0 ppid=3789 pid=4074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:23.360000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:23.365000 audit[4074]: NETFILTER_CFG table=nat:117 family=2 entries=12 op=nft_register_rule pid=4074 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:23.365000 audit[4074]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc9667700 a2=0 a3=1 items=0 ppid=3789 pid=4074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:23.365000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:24.667000 audit[4076]: NETFILTER_CFG table=filter:118 family=2 entries=21 op=nft_register_rule pid=4076 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:24.683857 kernel: kauditd_printk_skb: 25 callbacks suppressed Jan 14 00:04:24.683977 kernel: audit: type=1325 audit(1768349064.667:550): table=filter:118 family=2 entries=21 op=nft_register_rule pid=4076 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:24.667000 audit[4076]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd3249980 a2=0 a3=1 items=0 ppid=3789 pid=4076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:24.704308 kernel: audit: type=1300 audit(1768349064.667:550): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd3249980 a2=0 a3=1 items=0 ppid=3789 pid=4076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:24.667000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:24.714015 kernel: audit: type=1327 audit(1768349064.667:550): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:24.704000 audit[4076]: NETFILTER_CFG table=nat:119 family=2 entries=12 op=nft_register_rule pid=4076 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:24.723998 kernel: audit: type=1325 audit(1768349064.704:551): table=nat:119 family=2 entries=12 op=nft_register_rule pid=4076 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:24.724076 kubelet[3683]: I0114 00:04:24.717936 3683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-8889r" podStartSLOduration=12.564787318 podStartE2EDuration="16.717919932s" podCreationTimestamp="2026-01-14 00:04:08 +0000 UTC" firstStartedPulling="2026-01-14 00:04:09.222930029 +0000 UTC m=+6.928441485" lastFinishedPulling="2026-01-14 00:04:13.376062643 +0000 UTC m=+11.081574099" observedRunningTime="2026-01-14 00:04:14.444268284 +0000 UTC m=+12.149779740" watchObservedRunningTime="2026-01-14 00:04:24.717919932 +0000 UTC m=+22.423431396" Jan 14 00:04:24.704000 audit[4076]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd3249980 a2=0 a3=1 items=0 ppid=3789 pid=4076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:24.727947 systemd[1]: Created slice kubepods-besteffort-pod2a06447a_cce5_47c6_80c8_0083e792046e.slice - libcontainer container kubepods-besteffort-pod2a06447a_cce5_47c6_80c8_0083e792046e.slice. 
Jan 14 00:04:24.704000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:24.755552 kernel: audit: type=1300 audit(1768349064.704:551): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd3249980 a2=0 a3=1 items=0 ppid=3789 pid=4076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:24.756643 kernel: audit: type=1327 audit(1768349064.704:551): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:24.826428 kubelet[3683]: I0114 00:04:24.826374 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a06447a-cce5-47c6-80c8-0083e792046e-tigera-ca-bundle\") pod \"calico-typha-ff5767744-pblxp\" (UID: \"2a06447a-cce5-47c6-80c8-0083e792046e\") " pod="calico-system/calico-typha-ff5767744-pblxp" Jan 14 00:04:24.826428 kubelet[3683]: I0114 00:04:24.826427 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2a06447a-cce5-47c6-80c8-0083e792046e-typha-certs\") pod \"calico-typha-ff5767744-pblxp\" (UID: \"2a06447a-cce5-47c6-80c8-0083e792046e\") " pod="calico-system/calico-typha-ff5767744-pblxp" Jan 14 00:04:24.826428 kubelet[3683]: I0114 00:04:24.826442 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bkbb\" (UniqueName: \"kubernetes.io/projected/2a06447a-cce5-47c6-80c8-0083e792046e-kube-api-access-7bkbb\") pod \"calico-typha-ff5767744-pblxp\" (UID: \"2a06447a-cce5-47c6-80c8-0083e792046e\") " pod="calico-system/calico-typha-ff5767744-pblxp" Jan 14 00:04:24.930215 systemd[1]: Created slice kubepods-besteffort-pode84b79e7_44ef_4828_bbaa_4e1aa0ae6732.slice - libcontainer container kubepods-besteffort-pode84b79e7_44ef_4828_bbaa_4e1aa0ae6732.slice. 
Jan 14 00:04:25.027349 kubelet[3683]: I0114 00:04:25.027304 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e84b79e7-44ef-4828-bbaa-4e1aa0ae6732-var-lib-calico\") pod \"calico-node-9txwq\" (UID: \"e84b79e7-44ef-4828-bbaa-4e1aa0ae6732\") " pod="calico-system/calico-node-9txwq" Jan 14 00:04:25.027349 kubelet[3683]: I0114 00:04:25.027341 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e84b79e7-44ef-4828-bbaa-4e1aa0ae6732-cni-net-dir\") pod \"calico-node-9txwq\" (UID: \"e84b79e7-44ef-4828-bbaa-4e1aa0ae6732\") " pod="calico-system/calico-node-9txwq" Jan 14 00:04:25.027349 kubelet[3683]: I0114 00:04:25.027353 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e84b79e7-44ef-4828-bbaa-4e1aa0ae6732-policysync\") pod \"calico-node-9txwq\" (UID: \"e84b79e7-44ef-4828-bbaa-4e1aa0ae6732\") " pod="calico-system/calico-node-9txwq" Jan 14 00:04:25.027529 kubelet[3683]: I0114 00:04:25.027365 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk6hl\" (UniqueName: \"kubernetes.io/projected/e84b79e7-44ef-4828-bbaa-4e1aa0ae6732-kube-api-access-jk6hl\") pod \"calico-node-9txwq\" (UID: \"e84b79e7-44ef-4828-bbaa-4e1aa0ae6732\") " pod="calico-system/calico-node-9txwq" Jan 14 00:04:25.027529 kubelet[3683]: I0114 00:04:25.027377 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e84b79e7-44ef-4828-bbaa-4e1aa0ae6732-xtables-lock\") pod \"calico-node-9txwq\" (UID: \"e84b79e7-44ef-4828-bbaa-4e1aa0ae6732\") " pod="calico-system/calico-node-9txwq" Jan 14 00:04:25.027529 kubelet[3683]: I0114 00:04:25.027388 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e84b79e7-44ef-4828-bbaa-4e1aa0ae6732-cni-log-dir\") pod \"calico-node-9txwq\" (UID: \"e84b79e7-44ef-4828-bbaa-4e1aa0ae6732\") " pod="calico-system/calico-node-9txwq" Jan 14 00:04:25.027529 kubelet[3683]: I0114 00:04:25.027397 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e84b79e7-44ef-4828-bbaa-4e1aa0ae6732-node-certs\") pod \"calico-node-9txwq\" (UID: \"e84b79e7-44ef-4828-bbaa-4e1aa0ae6732\") " pod="calico-system/calico-node-9txwq" Jan 14 00:04:25.027529 kubelet[3683]: I0114 00:04:25.027408 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e84b79e7-44ef-4828-bbaa-4e1aa0ae6732-cni-bin-dir\") pod \"calico-node-9txwq\" (UID: \"e84b79e7-44ef-4828-bbaa-4e1aa0ae6732\") " pod="calico-system/calico-node-9txwq" Jan 14 00:04:25.027611 kubelet[3683]: I0114 00:04:25.027416 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e84b79e7-44ef-4828-bbaa-4e1aa0ae6732-lib-modules\") pod \"calico-node-9txwq\" (UID: \"e84b79e7-44ef-4828-bbaa-4e1aa0ae6732\") " pod="calico-system/calico-node-9txwq" Jan 14 00:04:25.027611 kubelet[3683]: I0114 00:04:25.027425 3683 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e84b79e7-44ef-4828-bbaa-4e1aa0ae6732-tigera-ca-bundle\") pod \"calico-node-9txwq\" (UID: \"e84b79e7-44ef-4828-bbaa-4e1aa0ae6732\") " pod="calico-system/calico-node-9txwq" Jan 14 00:04:25.027611 kubelet[3683]: I0114 00:04:25.027445 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e84b79e7-44ef-4828-bbaa-4e1aa0ae6732-var-run-calico\") pod \"calico-node-9txwq\" (UID: \"e84b79e7-44ef-4828-bbaa-4e1aa0ae6732\") " pod="calico-system/calico-node-9txwq" Jan 14 00:04:25.027611 kubelet[3683]: I0114 00:04:25.027457 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e84b79e7-44ef-4828-bbaa-4e1aa0ae6732-flexvol-driver-host\") pod \"calico-node-9txwq\" (UID: \"e84b79e7-44ef-4828-bbaa-4e1aa0ae6732\") " pod="calico-system/calico-node-9txwq" Jan 14 00:04:25.057265 containerd[2133]: time="2026-01-14T00:04:25.057214452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-ff5767744-pblxp,Uid:2a06447a-cce5-47c6-80c8-0083e792046e,Namespace:calico-system,Attempt:0,}" Jan 14 00:04:25.106142 containerd[2133]: time="2026-01-14T00:04:25.106092895Z" level=info msg="connecting to shim 7d6fd1fbe9a20cc7eaab5e3523f5d0f0e7a27cd4225fe07118ad99bf8d0a0eec" address="unix:///run/containerd/s/3d97019d05413f35e3f37701df4f536dc798134412ca66ab8dd0b11f9c202ecc" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:04:25.143359 kubelet[3683]: E0114 00:04:25.143288 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hgz55" podUID="f0601279-098f-420b-84a8-b4028d2c0ea2" Jan 14 00:04:25.145468 kubelet[3683]: E0114 00:04:25.145374 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.145468 kubelet[3683]: W0114 00:04:25.145408 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.145468 kubelet[3683]: E0114 00:04:25.145433 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.159623 systemd[1]: Started cri-containerd-7d6fd1fbe9a20cc7eaab5e3523f5d0f0e7a27cd4225fe07118ad99bf8d0a0eec.scope - libcontainer container 7d6fd1fbe9a20cc7eaab5e3523f5d0f0e7a27cd4225fe07118ad99bf8d0a0eec. 
Jan 14 00:04:25.161908 kubelet[3683]: E0114 00:04:25.161882 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.162052 kubelet[3683]: W0114 00:04:25.162037 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.162888 kubelet[3683]: E0114 00:04:25.162876 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.175000 audit: BPF prog-id=175 op=LOAD Jan 14 00:04:25.182186 kernel: audit: type=1334 audit(1768349065.175:552): prog-id=175 op=LOAD Jan 14 00:04:25.182000 audit: BPF prog-id=176 op=LOAD Jan 14 00:04:25.182000 audit[4100]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4088 pid=4100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:25.206878 kernel: audit: type=1334 audit(1768349065.182:553): prog-id=176 op=LOAD Jan 14 00:04:25.207010 kernel: audit: type=1300 audit(1768349065.182:553): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4088 pid=4100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:25.209311 kubelet[3683]: E0114 00:04:25.209275 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.209311 kubelet[3683]: W0114 00:04:25.209305 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.209428 kubelet[3683]: E0114 00:04:25.209329 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:04:25.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764366664316662653961323063633765616162356533353233663564 Jan 14 00:04:25.226418 kernel: audit: type=1327 audit(1768349065.182:553): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764366664316662653961323063633765616162356533353233663564 Jan 14 00:04:25.227838 kubelet[3683]: E0114 00:04:25.227645 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.227838 kubelet[3683]: W0114 00:04:25.227671 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.227838 kubelet[3683]: E0114 00:04:25.227722 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.228127 kubelet[3683]: E0114 00:04:25.227977 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.228127 kubelet[3683]: W0114 00:04:25.227993 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.228127 kubelet[3683]: E0114 00:04:25.228001 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.228127 kubelet[3683]: E0114 00:04:25.228118 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.228127 kubelet[3683]: W0114 00:04:25.228123 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.228127 kubelet[3683]: E0114 00:04:25.228129 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:04:25.182000 audit: BPF prog-id=176 op=UNLOAD Jan 14 00:04:25.182000 audit[4100]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4088 pid=4100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:25.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764366664316662653961323063633765616162356533353233663564 Jan 14 00:04:25.188000 audit: BPF prog-id=177 op=LOAD Jan 14 00:04:25.188000 audit[4100]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4088 pid=4100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:25.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764366664316662653961323063633765616162356533353233663564 Jan 14 00:04:25.188000 audit: BPF prog-id=178 op=LOAD Jan 14 00:04:25.188000 audit[4100]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4088 pid=4100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:25.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764366664316662653961323063633765616162356533353233663564 Jan 14 00:04:25.188000 audit: BPF prog-id=178 op=UNLOAD Jan 14 00:04:25.188000 audit[4100]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4088 pid=4100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:25.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764366664316662653961323063633765616162356533353233663564 Jan 14 00:04:25.188000 audit: BPF prog-id=177 op=UNLOAD Jan 14 00:04:25.188000 audit[4100]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4088 pid=4100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:25.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764366664316662653961323063633765616162356533353233663564 Jan 14 00:04:25.188000 audit: BPF prog-id=179 op=LOAD Jan 14 00:04:25.188000 audit[4100]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4088 pid=4100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:25.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764366664316662653961323063633765616162356533353233663564 Jan 14 00:04:25.229335 kubelet[3683]: E0114 00:04:25.229264 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.229335 kubelet[3683]: W0114 00:04:25.229278 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.229335 kubelet[3683]: E0114 00:04:25.229303 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.229964 kubelet[3683]: E0114 00:04:25.229944 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.229964 kubelet[3683]: W0114 00:04:25.229960 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.230025 kubelet[3683]: E0114 00:04:25.229971 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.230286 kubelet[3683]: E0114 00:04:25.230177 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.230286 kubelet[3683]: W0114 00:04:25.230218 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.230286 kubelet[3683]: E0114 00:04:25.230229 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.231142 kubelet[3683]: E0114 00:04:25.231118 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.231142 kubelet[3683]: W0114 00:04:25.231141 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.231599 kubelet[3683]: E0114 00:04:25.231152 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:04:25.231599 kubelet[3683]: E0114 00:04:25.231355 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.231599 kubelet[3683]: W0114 00:04:25.231362 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.231599 kubelet[3683]: E0114 00:04:25.231369 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.232024 kubelet[3683]: E0114 00:04:25.232002 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.232079 kubelet[3683]: W0114 00:04:25.232036 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.232079 kubelet[3683]: E0114 00:04:25.232050 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.232250 kubelet[3683]: E0114 00:04:25.232232 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.232250 kubelet[3683]: W0114 00:04:25.232244 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.232250 kubelet[3683]: E0114 00:04:25.232252 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.232853 kubelet[3683]: E0114 00:04:25.232386 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.232853 kubelet[3683]: W0114 00:04:25.232392 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.232853 kubelet[3683]: E0114 00:04:25.232399 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.232853 kubelet[3683]: E0114 00:04:25.232545 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.232853 kubelet[3683]: W0114 00:04:25.232551 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.232853 kubelet[3683]: E0114 00:04:25.232562 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:04:25.232853 kubelet[3683]: E0114 00:04:25.232678 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.232853 kubelet[3683]: W0114 00:04:25.232684 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.232853 kubelet[3683]: E0114 00:04:25.232692 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.232853 kubelet[3683]: E0114 00:04:25.232810 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.233037 kubelet[3683]: W0114 00:04:25.232815 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.233037 kubelet[3683]: E0114 00:04:25.232821 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.233037 kubelet[3683]: E0114 00:04:25.232927 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.233037 kubelet[3683]: W0114 00:04:25.232932 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.233037 kubelet[3683]: E0114 00:04:25.232938 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.233634 kubelet[3683]: E0114 00:04:25.233084 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.233634 kubelet[3683]: W0114 00:04:25.233091 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.233634 kubelet[3683]: E0114 00:04:25.233097 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.233634 kubelet[3683]: E0114 00:04:25.233222 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.233634 kubelet[3683]: W0114 00:04:25.233229 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.233634 kubelet[3683]: E0114 00:04:25.233235 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:04:25.233634 kubelet[3683]: E0114 00:04:25.233551 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.233634 kubelet[3683]: W0114 00:04:25.233561 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.233634 kubelet[3683]: E0114 00:04:25.233570 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.234141 kubelet[3683]: E0114 00:04:25.233695 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.234141 kubelet[3683]: W0114 00:04:25.233701 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.234141 kubelet[3683]: E0114 00:04:25.233707 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.235218 kubelet[3683]: E0114 00:04:25.235193 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.235218 kubelet[3683]: W0114 00:04:25.235211 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.235218 kubelet[3683]: E0114 00:04:25.235222 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.235648 kubelet[3683]: I0114 00:04:25.235248 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f0601279-098f-420b-84a8-b4028d2c0ea2-kubelet-dir\") pod \"csi-node-driver-hgz55\" (UID: \"f0601279-098f-420b-84a8-b4028d2c0ea2\") " pod="calico-system/csi-node-driver-hgz55" Jan 14 00:04:25.235897 kubelet[3683]: E0114 00:04:25.235805 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.235897 kubelet[3683]: W0114 00:04:25.235825 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.236246 kubelet[3683]: E0114 00:04:25.236228 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:04:25.236299 kubelet[3683]: I0114 00:04:25.236255 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f0601279-098f-420b-84a8-b4028d2c0ea2-varrun\") pod \"csi-node-driver-hgz55\" (UID: \"f0601279-098f-420b-84a8-b4028d2c0ea2\") " pod="calico-system/csi-node-driver-hgz55" Jan 14 00:04:25.236440 kubelet[3683]: E0114 00:04:25.236412 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.236440 kubelet[3683]: W0114 00:04:25.236424 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.236440 kubelet[3683]: E0114 00:04:25.236432 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.236874 kubelet[3683]: I0114 00:04:25.236444 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhqbz\" (UniqueName: \"kubernetes.io/projected/f0601279-098f-420b-84a8-b4028d2c0ea2-kube-api-access-hhqbz\") pod \"csi-node-driver-hgz55\" (UID: \"f0601279-098f-420b-84a8-b4028d2c0ea2\") " pod="calico-system/csi-node-driver-hgz55" Jan 14 00:04:25.236874 kubelet[3683]: E0114 00:04:25.236562 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.236874 kubelet[3683]: W0114 00:04:25.236568 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.236874 kubelet[3683]: E0114 00:04:25.236574 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.236874 kubelet[3683]: I0114 00:04:25.236583 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f0601279-098f-420b-84a8-b4028d2c0ea2-socket-dir\") pod \"csi-node-driver-hgz55\" (UID: \"f0601279-098f-420b-84a8-b4028d2c0ea2\") " pod="calico-system/csi-node-driver-hgz55" Jan 14 00:04:25.236874 kubelet[3683]: E0114 00:04:25.236701 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.236874 kubelet[3683]: W0114 00:04:25.236707 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.236874 kubelet[3683]: E0114 00:04:25.236713 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:04:25.237000 containerd[2133]: time="2026-01-14T00:04:25.236603635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9txwq,Uid:e84b79e7-44ef-4828-bbaa-4e1aa0ae6732,Namespace:calico-system,Attempt:0,}" Jan 14 00:04:25.237022 kubelet[3683]: I0114 00:04:25.236722 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f0601279-098f-420b-84a8-b4028d2c0ea2-registration-dir\") pod \"csi-node-driver-hgz55\" (UID: \"f0601279-098f-420b-84a8-b4028d2c0ea2\") " pod="calico-system/csi-node-driver-hgz55" Jan 14 00:04:25.237022 kubelet[3683]: E0114 00:04:25.236854 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.237022 kubelet[3683]: W0114 00:04:25.236859 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.237022 kubelet[3683]: E0114 00:04:25.236871 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.237022 kubelet[3683]: E0114 00:04:25.236973 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.237022 kubelet[3683]: W0114 00:04:25.236977 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.237022 kubelet[3683]: E0114 00:04:25.236986 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.237122 kubelet[3683]: E0114 00:04:25.237090 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.237122 kubelet[3683]: W0114 00:04:25.237094 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.237122 kubelet[3683]: E0114 00:04:25.237104 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.237910 kubelet[3683]: E0114 00:04:25.237886 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.237910 kubelet[3683]: W0114 00:04:25.237904 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.237910 kubelet[3683]: E0114 00:04:25.237922 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:04:25.238083 kubelet[3683]: E0114 00:04:25.238067 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.238083 kubelet[3683]: W0114 00:04:25.238078 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.238083 kubelet[3683]: E0114 00:04:25.238091 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.238234 kubelet[3683]: E0114 00:04:25.238221 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.238234 kubelet[3683]: W0114 00:04:25.238230 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.238312 kubelet[3683]: E0114 00:04:25.238299 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.238363 kubelet[3683]: E0114 00:04:25.238349 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.238363 kubelet[3683]: W0114 00:04:25.238355 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.238435 kubelet[3683]: E0114 00:04:25.238422 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.238461 kubelet[3683]: E0114 00:04:25.238458 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.238509 kubelet[3683]: W0114 00:04:25.238461 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.238509 kubelet[3683]: E0114 00:04:25.238467 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.238917 kubelet[3683]: E0114 00:04:25.238896 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.238917 kubelet[3683]: W0114 00:04:25.238913 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.239078 kubelet[3683]: E0114 00:04:25.238924 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:04:25.239112 kubelet[3683]: E0114 00:04:25.239080 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.239112 kubelet[3683]: W0114 00:04:25.239086 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.239112 kubelet[3683]: E0114 00:04:25.239093 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.287840 containerd[2133]: time="2026-01-14T00:04:25.287675026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-ff5767744-pblxp,Uid:2a06447a-cce5-47c6-80c8-0083e792046e,Namespace:calico-system,Attempt:0,} returns sandbox id \"7d6fd1fbe9a20cc7eaab5e3523f5d0f0e7a27cd4225fe07118ad99bf8d0a0eec\"" Jan 14 00:04:25.291847 containerd[2133]: time="2026-01-14T00:04:25.291143318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 14 00:04:25.296146 containerd[2133]: time="2026-01-14T00:04:25.294976170Z" level=info msg="connecting to shim 0113002abbb724f5b783ea9789789ec5d8a728ebcc9b05ecf42478d91f833a08" address="unix:///run/containerd/s/27a0e9155a050f8eef4bdf357f56396add2424617b2056b03f10ad59c60e84fd" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:04:25.320406 systemd[1]: Started cri-containerd-0113002abbb724f5b783ea9789789ec5d8a728ebcc9b05ecf42478d91f833a08.scope - libcontainer container 0113002abbb724f5b783ea9789789ec5d8a728ebcc9b05ecf42478d91f833a08. Jan 14 00:04:25.329000 audit: BPF prog-id=180 op=LOAD Jan 14 00:04:25.329000 audit: BPF prog-id=181 op=LOAD Jan 14 00:04:25.329000 audit[4194]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4181 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:25.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031313330303261626262373234663562373833656139373839373839 Jan 14 00:04:25.330000 audit: BPF prog-id=181 op=UNLOAD Jan 14 00:04:25.330000 audit[4194]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4181 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:25.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031313330303261626262373234663562373833656139373839373839 Jan 14 00:04:25.330000 audit: BPF prog-id=182 op=LOAD Jan 14 00:04:25.330000 audit[4194]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4181 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:25.330000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031313330303261626262373234663562373833656139373839373839 Jan 14 00:04:25.330000 audit: BPF prog-id=183 op=LOAD Jan 14 00:04:25.330000 audit[4194]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4181 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:25.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031313330303261626262373234663562373833656139373839373839 Jan 14 00:04:25.330000 audit: BPF prog-id=183 op=UNLOAD Jan 14 00:04:25.330000 audit[4194]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4181 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:25.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031313330303261626262373234663562373833656139373839373839 Jan 14 00:04:25.330000 audit: BPF prog-id=182 op=UNLOAD Jan 14 00:04:25.330000 audit[4194]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4181 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:25.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031313330303261626262373234663562373833656139373839373839 Jan 14 00:04:25.330000 audit: BPF prog-id=184 op=LOAD Jan 14 00:04:25.330000 audit[4194]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4181 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:25.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031313330303261626262373234663562373833656139373839373839 Jan 14 00:04:25.338287 kubelet[3683]: E0114 00:04:25.338239 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.338287 kubelet[3683]: W0114 00:04:25.338265 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.338287 kubelet[3683]: E0114 00:04:25.338290 3683 plugins.go:695] "Error dynamically 
probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.339427 kubelet[3683]: E0114 00:04:25.339402 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.339427 kubelet[3683]: W0114 00:04:25.339419 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.339644 kubelet[3683]: E0114 00:04:25.339442 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.339860 kubelet[3683]: E0114 00:04:25.339843 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.339860 kubelet[3683]: W0114 00:04:25.339858 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.340030 kubelet[3683]: E0114 00:04:25.339874 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.340369 kubelet[3683]: E0114 00:04:25.340318 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.340369 kubelet[3683]: W0114 00:04:25.340332 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.340369 kubelet[3683]: E0114 00:04:25.340348 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.340714 kubelet[3683]: E0114 00:04:25.340621 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.340714 kubelet[3683]: W0114 00:04:25.340640 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.340714 kubelet[3683]: E0114 00:04:25.340656 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.341070 kubelet[3683]: E0114 00:04:25.341048 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.341070 kubelet[3683]: W0114 00:04:25.341067 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.341227 kubelet[3683]: E0114 00:04:25.341082 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:04:25.341458 kubelet[3683]: E0114 00:04:25.341441 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.341458 kubelet[3683]: W0114 00:04:25.341454 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.341740 kubelet[3683]: E0114 00:04:25.341713 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.342412 kubelet[3683]: E0114 00:04:25.342387 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.342412 kubelet[3683]: W0114 00:04:25.342407 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.342548 kubelet[3683]: E0114 00:04:25.342493 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.342686 kubelet[3683]: E0114 00:04:25.342669 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.342686 kubelet[3683]: W0114 00:04:25.342682 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.342810 kubelet[3683]: E0114 00:04:25.342792 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.343278 kubelet[3683]: E0114 00:04:25.343254 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.343278 kubelet[3683]: W0114 00:04:25.343275 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.343349 kubelet[3683]: E0114 00:04:25.343333 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.343455 kubelet[3683]: E0114 00:04:25.343444 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.343455 kubelet[3683]: W0114 00:04:25.343453 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.344324 kubelet[3683]: E0114 00:04:25.344283 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:04:25.344608 kubelet[3683]: E0114 00:04:25.344580 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.344608 kubelet[3683]: W0114 00:04:25.344593 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.344890 kubelet[3683]: E0114 00:04:25.344873 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.345130 kubelet[3683]: E0114 00:04:25.345066 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.345130 kubelet[3683]: W0114 00:04:25.345091 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.345268 kubelet[3683]: E0114 00:04:25.345253 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.345523 kubelet[3683]: E0114 00:04:25.345476 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.345523 kubelet[3683]: W0114 00:04:25.345488 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.345682 kubelet[3683]: E0114 00:04:25.345613 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.345871 kubelet[3683]: E0114 00:04:25.345847 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.345988 kubelet[3683]: W0114 00:04:25.345858 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.346106 kubelet[3683]: E0114 00:04:25.346026 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.346340 kubelet[3683]: E0114 00:04:25.346323 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.347248 kubelet[3683]: W0114 00:04:25.346410 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.347248 kubelet[3683]: E0114 00:04:25.346927 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:04:25.347461 kubelet[3683]: E0114 00:04:25.347373 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.347461 kubelet[3683]: W0114 00:04:25.347387 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.347607 kubelet[3683]: E0114 00:04:25.347559 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.347677 kubelet[3683]: E0114 00:04:25.347655 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.347677 kubelet[3683]: W0114 00:04:25.347664 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.347780 kubelet[3683]: E0114 00:04:25.347741 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.347973 kubelet[3683]: E0114 00:04:25.347962 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.348065 kubelet[3683]: W0114 00:04:25.348054 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.348292 kubelet[3683]: E0114 00:04:25.348280 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.348626 kubelet[3683]: E0114 00:04:25.348560 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.348626 kubelet[3683]: W0114 00:04:25.348574 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.348782 kubelet[3683]: E0114 00:04:25.348731 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.349231 kubelet[3683]: E0114 00:04:25.349137 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.349231 kubelet[3683]: W0114 00:04:25.349151 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.349396 kubelet[3683]: E0114 00:04:25.349288 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:04:25.349838 kubelet[3683]: E0114 00:04:25.349811 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.349838 kubelet[3683]: W0114 00:04:25.349824 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.350327 kubelet[3683]: E0114 00:04:25.350089 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.350786 kubelet[3683]: E0114 00:04:25.350769 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.351081 kubelet[3683]: W0114 00:04:25.351013 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.351566 kubelet[3683]: E0114 00:04:25.351279 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.351926 kubelet[3683]: E0114 00:04:25.351912 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.352033 kubelet[3683]: W0114 00:04:25.352008 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.352117 kubelet[3683]: E0114 00:04:25.352107 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.352421 kubelet[3683]: E0114 00:04:25.352381 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.352421 kubelet[3683]: W0114 00:04:25.352393 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.352421 kubelet[3683]: E0114 00:04:25.352403 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:04:25.361667 containerd[2133]: time="2026-01-14T00:04:25.360862181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9txwq,Uid:e84b79e7-44ef-4828-bbaa-4e1aa0ae6732,Namespace:calico-system,Attempt:0,} returns sandbox id \"0113002abbb724f5b783ea9789789ec5d8a728ebcc9b05ecf42478d91f833a08\"" Jan 14 00:04:25.362378 kubelet[3683]: E0114 00:04:25.362358 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:25.362452 kubelet[3683]: W0114 00:04:25.362441 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:25.362512 kubelet[3683]: E0114 00:04:25.362501 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:25.741000 audit[4247]: NETFILTER_CFG table=filter:120 family=2 entries=22 op=nft_register_rule pid=4247 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:25.741000 audit[4247]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffde9840c0 a2=0 a3=1 items=0 ppid=3789 pid=4247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:25.741000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:25.748000 audit[4247]: NETFILTER_CFG table=nat:121 family=2 entries=12 op=nft_register_rule pid=4247 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:25.748000 audit[4247]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffde9840c0 a2=0 a3=1 items=0 ppid=3789 pid=4247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:25.748000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:26.381798 kubelet[3683]: E0114 00:04:26.381051 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hgz55" podUID="f0601279-098f-420b-84a8-b4028d2c0ea2" Jan 14 00:04:26.621765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3483542763.mount: Deactivated successfully. 
Jan 14 00:04:27.561610 containerd[2133]: time="2026-01-14T00:04:27.561404190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:04:27.564824 containerd[2133]: time="2026-01-14T00:04:27.564767640Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Jan 14 00:04:27.567935 containerd[2133]: time="2026-01-14T00:04:27.567885348Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:04:27.573405 containerd[2133]: time="2026-01-14T00:04:27.573238361Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:04:27.573682 containerd[2133]: time="2026-01-14T00:04:27.573594033Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.282399746s" Jan 14 00:04:27.573682 containerd[2133]: time="2026-01-14T00:04:27.573625242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 14 00:04:27.575375 containerd[2133]: time="2026-01-14T00:04:27.575342511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 14 00:04:27.588987 containerd[2133]: time="2026-01-14T00:04:27.588948065Z" level=info msg="CreateContainer within sandbox \"7d6fd1fbe9a20cc7eaab5e3523f5d0f0e7a27cd4225fe07118ad99bf8d0a0eec\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 14 00:04:27.610414 containerd[2133]: time="2026-01-14T00:04:27.610337742Z" level=info msg="Container c955735f44be8a66804d2b65528dc516cb980ecac823b4af217c0d305e320657: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:04:27.626985 containerd[2133]: time="2026-01-14T00:04:27.626931746Z" level=info msg="CreateContainer within sandbox \"7d6fd1fbe9a20cc7eaab5e3523f5d0f0e7a27cd4225fe07118ad99bf8d0a0eec\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c955735f44be8a66804d2b65528dc516cb980ecac823b4af217c0d305e320657\"" Jan 14 00:04:27.628679 containerd[2133]: time="2026-01-14T00:04:27.627487694Z" level=info msg="StartContainer for \"c955735f44be8a66804d2b65528dc516cb980ecac823b4af217c0d305e320657\"" Jan 14 00:04:27.629794 containerd[2133]: time="2026-01-14T00:04:27.629766448Z" level=info msg="connecting to shim c955735f44be8a66804d2b65528dc516cb980ecac823b4af217c0d305e320657" address="unix:///run/containerd/s/3d97019d05413f35e3f37701df4f536dc798134412ca66ab8dd0b11f9c202ecc" protocol=ttrpc version=3 Jan 14 00:04:27.656471 systemd[1]: Started cri-containerd-c955735f44be8a66804d2b65528dc516cb980ecac823b4af217c0d305e320657.scope - libcontainer container c955735f44be8a66804d2b65528dc516cb980ecac823b4af217c0d305e320657. 
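[annotation] The PullImage / CreateContainer / StartContainer / "connecting to shim ... protocol=ttrpc" sequence above is containerd's CRI service driving the same flow that the upstream containerd Go client exposes directly. A rough sketch of that flow against the "k8s.io" namespace follows; it assumes the containerd 1.x client package layout, reuses the typha image reference from the log, and uses placeholder container and snapshot IDs with compressed error handling.

```go
// pull_and_run.go - sketch of the pull/create/start flow visible in the CRI log lines above.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed resources live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Roughly the "PullImage ... returns image reference" lines.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.4", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer + StartContainer: a container record, then a task served by a runc shim over ttrpc.
	container, err := client.NewContainer(ctx, "typha-demo",
		containerd.WithNewSnapshot("typha-demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}
```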
Jan 14 00:04:27.668000 audit: BPF prog-id=185 op=LOAD Jan 14 00:04:27.669000 audit: BPF prog-id=186 op=LOAD Jan 14 00:04:27.669000 audit[4258]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=4088 pid=4258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:27.669000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353537333566343462653861363638303464326236353532386463 Jan 14 00:04:27.669000 audit: BPF prog-id=186 op=UNLOAD Jan 14 00:04:27.669000 audit[4258]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4088 pid=4258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:27.669000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353537333566343462653861363638303464326236353532386463 Jan 14 00:04:27.669000 audit: BPF prog-id=187 op=LOAD Jan 14 00:04:27.669000 audit[4258]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=4088 pid=4258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:27.669000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353537333566343462653861363638303464326236353532386463 Jan 14 00:04:27.669000 audit: BPF prog-id=188 op=LOAD Jan 14 00:04:27.669000 audit[4258]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=4088 pid=4258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:27.669000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353537333566343462653861363638303464326236353532386463 Jan 14 00:04:27.669000 audit: BPF prog-id=188 op=UNLOAD Jan 14 00:04:27.669000 audit[4258]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4088 pid=4258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:27.669000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353537333566343462653861363638303464326236353532386463 Jan 14 00:04:27.669000 audit: BPF prog-id=187 op=UNLOAD Jan 14 00:04:27.669000 audit[4258]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4088 pid=4258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:27.669000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353537333566343462653861363638303464326236353532386463 Jan 14 00:04:27.670000 audit: BPF prog-id=189 op=LOAD Jan 14 00:04:27.670000 audit[4258]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=4088 pid=4258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:27.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353537333566343462653861363638303464326236353532386463 Jan 14 00:04:27.701302 containerd[2133]: time="2026-01-14T00:04:27.701258350Z" level=info msg="StartContainer for \"c955735f44be8a66804d2b65528dc516cb980ecac823b4af217c0d305e320657\" returns successfully" Jan 14 00:04:28.381212 kubelet[3683]: E0114 00:04:28.380955 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hgz55" podUID="f0601279-098f-420b-84a8-b4028d2c0ea2" Jan 14 00:04:28.474405 kubelet[3683]: I0114 00:04:28.474325 3683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-ff5767744-pblxp" podStartSLOduration=2.190409715 podStartE2EDuration="4.474304389s" podCreationTimestamp="2026-01-14 00:04:24 +0000 UTC" firstStartedPulling="2026-01-14 00:04:25.290814543 +0000 UTC m=+22.996325999" lastFinishedPulling="2026-01-14 00:04:27.574709217 +0000 UTC m=+25.280220673" observedRunningTime="2026-01-14 00:04:28.474147418 +0000 UTC m=+26.179658882" watchObservedRunningTime="2026-01-14 00:04:28.474304389 +0000 UTC m=+26.179815845" Jan 14 00:04:28.554977 kubelet[3683]: E0114 00:04:28.554887 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.554977 kubelet[3683]: W0114 00:04:28.554920 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.554977 kubelet[3683]: E0114 00:04:28.554940 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:04:28.555427 kubelet[3683]: E0114 00:04:28.555350 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.555427 kubelet[3683]: W0114 00:04:28.555364 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.555427 kubelet[3683]: E0114 00:04:28.555403 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:28.555674 kubelet[3683]: E0114 00:04:28.555664 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.555784 kubelet[3683]: W0114 00:04:28.555730 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.555784 kubelet[3683]: E0114 00:04:28.555744 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:28.556046 kubelet[3683]: E0114 00:04:28.556002 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.556046 kubelet[3683]: W0114 00:04:28.556013 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.556046 kubelet[3683]: E0114 00:04:28.556023 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:28.556354 kubelet[3683]: E0114 00:04:28.556305 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.556354 kubelet[3683]: W0114 00:04:28.556314 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.556354 kubelet[3683]: E0114 00:04:28.556322 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:28.556606 kubelet[3683]: E0114 00:04:28.556558 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.556606 kubelet[3683]: W0114 00:04:28.556567 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.556606 kubelet[3683]: E0114 00:04:28.556575 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:04:28.556850 kubelet[3683]: E0114 00:04:28.556800 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.556850 kubelet[3683]: W0114 00:04:28.556810 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.556850 kubelet[3683]: E0114 00:04:28.556818 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:28.557100 kubelet[3683]: E0114 00:04:28.557049 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.557100 kubelet[3683]: W0114 00:04:28.557059 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.557100 kubelet[3683]: E0114 00:04:28.557069 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:28.557357 kubelet[3683]: E0114 00:04:28.557337 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.557460 kubelet[3683]: W0114 00:04:28.557348 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.557460 kubelet[3683]: E0114 00:04:28.557426 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:28.557697 kubelet[3683]: E0114 00:04:28.557654 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.557697 kubelet[3683]: W0114 00:04:28.557664 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.557697 kubelet[3683]: E0114 00:04:28.557673 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:28.557936 kubelet[3683]: E0114 00:04:28.557904 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.557936 kubelet[3683]: W0114 00:04:28.557916 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.558039 kubelet[3683]: E0114 00:04:28.557926 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:04:28.558265 kubelet[3683]: E0114 00:04:28.558219 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.558265 kubelet[3683]: W0114 00:04:28.558228 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.558265 kubelet[3683]: E0114 00:04:28.558236 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:28.558539 kubelet[3683]: E0114 00:04:28.558491 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.558539 kubelet[3683]: W0114 00:04:28.558502 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.558539 kubelet[3683]: E0114 00:04:28.558511 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:28.558789 kubelet[3683]: E0114 00:04:28.558754 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.558789 kubelet[3683]: W0114 00:04:28.558764 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.558789 kubelet[3683]: E0114 00:04:28.558773 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:28.559109 kubelet[3683]: E0114 00:04:28.559059 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.559109 kubelet[3683]: W0114 00:04:28.559069 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.559109 kubelet[3683]: E0114 00:04:28.559077 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:28.567571 kubelet[3683]: E0114 00:04:28.567506 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.567571 kubelet[3683]: W0114 00:04:28.567524 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.567571 kubelet[3683]: E0114 00:04:28.567536 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:04:28.567772 kubelet[3683]: E0114 00:04:28.567758 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.567772 kubelet[3683]: W0114 00:04:28.567769 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.567832 kubelet[3683]: E0114 00:04:28.567781 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:28.567941 kubelet[3683]: E0114 00:04:28.567922 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.567941 kubelet[3683]: W0114 00:04:28.567931 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.567941 kubelet[3683]: E0114 00:04:28.567938 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:28.568137 kubelet[3683]: E0114 00:04:28.568123 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.568137 kubelet[3683]: W0114 00:04:28.568133 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.568306 kubelet[3683]: E0114 00:04:28.568144 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:28.568354 kubelet[3683]: E0114 00:04:28.568320 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.568354 kubelet[3683]: W0114 00:04:28.568327 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.568354 kubelet[3683]: E0114 00:04:28.568338 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:28.568459 kubelet[3683]: E0114 00:04:28.568448 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.568459 kubelet[3683]: W0114 00:04:28.568455 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.568459 kubelet[3683]: E0114 00:04:28.568464 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:04:28.568732 kubelet[3683]: E0114 00:04:28.568716 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.568788 kubelet[3683]: W0114 00:04:28.568776 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.568848 kubelet[3683]: E0114 00:04:28.568839 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:28.569111 kubelet[3683]: E0114 00:04:28.569047 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.569111 kubelet[3683]: W0114 00:04:28.569059 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.569111 kubelet[3683]: E0114 00:04:28.569078 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:28.569442 kubelet[3683]: E0114 00:04:28.569388 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.569442 kubelet[3683]: W0114 00:04:28.569402 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.569442 kubelet[3683]: E0114 00:04:28.569423 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:28.569721 kubelet[3683]: E0114 00:04:28.569649 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.569721 kubelet[3683]: W0114 00:04:28.569659 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.569721 kubelet[3683]: E0114 00:04:28.569677 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:28.570013 kubelet[3683]: E0114 00:04:28.569945 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.570013 kubelet[3683]: W0114 00:04:28.569957 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.570013 kubelet[3683]: E0114 00:04:28.569975 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:04:28.570256 kubelet[3683]: E0114 00:04:28.570246 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.570407 kubelet[3683]: W0114 00:04:28.570321 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.570407 kubelet[3683]: E0114 00:04:28.570349 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:28.570801 kubelet[3683]: E0114 00:04:28.570758 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.570801 kubelet[3683]: W0114 00:04:28.570768 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.570801 kubelet[3683]: E0114 00:04:28.570786 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:28.570970 kubelet[3683]: E0114 00:04:28.570955 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.570970 kubelet[3683]: W0114 00:04:28.570966 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.571041 kubelet[3683]: E0114 00:04:28.570977 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:28.571102 kubelet[3683]: E0114 00:04:28.571092 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.571102 kubelet[3683]: W0114 00:04:28.571099 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.571157 kubelet[3683]: E0114 00:04:28.571107 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:28.571235 kubelet[3683]: E0114 00:04:28.571222 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.571235 kubelet[3683]: W0114 00:04:28.571231 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.571280 kubelet[3683]: E0114 00:04:28.571237 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:04:28.571352 kubelet[3683]: E0114 00:04:28.571341 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.571352 kubelet[3683]: W0114 00:04:28.571348 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.571392 kubelet[3683]: E0114 00:04:28.571353 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:28.571590 kubelet[3683]: E0114 00:04:28.571577 3683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:04:28.571590 kubelet[3683]: W0114 00:04:28.571586 3683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:04:28.571642 kubelet[3683]: E0114 00:04:28.571593 3683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:04:28.914583 containerd[2133]: time="2026-01-14T00:04:28.914522097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:04:28.917848 containerd[2133]: time="2026-01-14T00:04:28.917781193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 14 00:04:28.921800 containerd[2133]: time="2026-01-14T00:04:28.921754960Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:04:28.926354 containerd[2133]: time="2026-01-14T00:04:28.926266738Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:04:28.926922 containerd[2133]: time="2026-01-14T00:04:28.926618834Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.351242762s" Jan 14 00:04:28.926922 containerd[2133]: time="2026-01-14T00:04:28.926653499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 14 00:04:28.930431 containerd[2133]: time="2026-01-14T00:04:28.930380973Z" level=info msg="CreateContainer within sandbox \"0113002abbb724f5b783ea9789789ec5d8a728ebcc9b05ecf42478d91f833a08\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 14 00:04:28.997094 containerd[2133]: time="2026-01-14T00:04:28.997038993Z" level=info msg="Container e8169f3333ecb4e0dc3a748ab41a47449e0307be4deefd5a0f7582fb004a90b2: CDI 
devices from CRI Config.CDIDevices: []" Jan 14 00:04:29.020215 containerd[2133]: time="2026-01-14T00:04:29.020142107Z" level=info msg="CreateContainer within sandbox \"0113002abbb724f5b783ea9789789ec5d8a728ebcc9b05ecf42478d91f833a08\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e8169f3333ecb4e0dc3a748ab41a47449e0307be4deefd5a0f7582fb004a90b2\"" Jan 14 00:04:29.021401 containerd[2133]: time="2026-01-14T00:04:29.021372246Z" level=info msg="StartContainer for \"e8169f3333ecb4e0dc3a748ab41a47449e0307be4deefd5a0f7582fb004a90b2\"" Jan 14 00:04:29.023241 containerd[2133]: time="2026-01-14T00:04:29.023208606Z" level=info msg="connecting to shim e8169f3333ecb4e0dc3a748ab41a47449e0307be4deefd5a0f7582fb004a90b2" address="unix:///run/containerd/s/27a0e9155a050f8eef4bdf357f56396add2424617b2056b03f10ad59c60e84fd" protocol=ttrpc version=3 Jan 14 00:04:29.045336 systemd[1]: Started cri-containerd-e8169f3333ecb4e0dc3a748ab41a47449e0307be4deefd5a0f7582fb004a90b2.scope - libcontainer container e8169f3333ecb4e0dc3a748ab41a47449e0307be4deefd5a0f7582fb004a90b2. Jan 14 00:04:29.080000 audit: BPF prog-id=190 op=LOAD Jan 14 00:04:29.080000 audit[4336]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4181 pid=4336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:29.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538313639663333333365636234653064633361373438616234316134 Jan 14 00:04:29.080000 audit: BPF prog-id=191 op=LOAD Jan 14 00:04:29.080000 audit[4336]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4181 pid=4336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:29.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538313639663333333365636234653064633361373438616234316134 Jan 14 00:04:29.080000 audit: BPF prog-id=191 op=UNLOAD Jan 14 00:04:29.080000 audit[4336]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4181 pid=4336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:29.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538313639663333333365636234653064633361373438616234316134 Jan 14 00:04:29.081000 audit: BPF prog-id=190 op=UNLOAD Jan 14 00:04:29.081000 audit[4336]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4181 pid=4336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:29.081000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538313639663333333365636234653064633361373438616234316134 Jan 14 00:04:29.081000 audit: BPF prog-id=192 op=LOAD Jan 14 00:04:29.081000 audit[4336]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4181 pid=4336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:29.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538313639663333333365636234653064633361373438616234316134 Jan 14 00:04:29.108397 containerd[2133]: time="2026-01-14T00:04:29.108306495Z" level=info msg="StartContainer for \"e8169f3333ecb4e0dc3a748ab41a47449e0307be4deefd5a0f7582fb004a90b2\" returns successfully" Jan 14 00:04:29.115695 systemd[1]: cri-containerd-e8169f3333ecb4e0dc3a748ab41a47449e0307be4deefd5a0f7582fb004a90b2.scope: Deactivated successfully. Jan 14 00:04:29.117000 audit: BPF prog-id=192 op=UNLOAD Jan 14 00:04:29.120148 containerd[2133]: time="2026-01-14T00:04:29.120025959Z" level=info msg="received container exit event container_id:\"e8169f3333ecb4e0dc3a748ab41a47449e0307be4deefd5a0f7582fb004a90b2\" id:\"e8169f3333ecb4e0dc3a748ab41a47449e0307be4deefd5a0f7582fb004a90b2\" pid:4350 exited_at:{seconds:1768349069 nanos:119699168}" Jan 14 00:04:29.141238 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8169f3333ecb4e0dc3a748ab41a47449e0307be4deefd5a0f7582fb004a90b2-rootfs.mount: Deactivated successfully. 
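[annotation] The audit records interleaved with these container lifecycle events carry the triggering command line in the PROCTITLE field as hex, with NUL bytes separating the arguments. Decoding the value from the earlier NETFILTER_CFG records recovers "iptables-restore -w 5 -W 100000 --noflush --counters", and the runc PROCTITLE values above decode the same way to the shim's "runc --root /run/containerd/runc/k8s.io --log ..." invocation. A small standalone decoder sketch (not part of any tooling shown in this log):

```go
// decode_proctitle.go - turn an audit PROCTITLE hex string back into the original argv.
package main

import (
	"encoding/hex"
	"fmt"
	"log"
	"strings"
)

func main() {
	// Value copied from one of the iptables-restore NETFILTER_CFG audit records above.
	const proctitle = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		log.Fatal(err)
	}
	// Arguments are NUL-separated in the audit record.
	args := strings.Split(string(raw), "\x00")
	fmt.Println(strings.Join(args, " "))
	// Prints: iptables-restore -w 5 -W 100000 --noflush --counters
}
```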
Jan 14 00:04:29.463910 kubelet[3683]: I0114 00:04:29.463876 3683 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 00:04:30.380384 kubelet[3683]: E0114 00:04:30.380143 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hgz55" podUID="f0601279-098f-420b-84a8-b4028d2c0ea2" Jan 14 00:04:30.473508 containerd[2133]: time="2026-01-14T00:04:30.473050268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 14 00:04:32.382798 kubelet[3683]: E0114 00:04:32.382759 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hgz55" podUID="f0601279-098f-420b-84a8-b4028d2c0ea2" Jan 14 00:04:33.655954 containerd[2133]: time="2026-01-14T00:04:33.655873915Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:04:33.661874 containerd[2133]: time="2026-01-14T00:04:33.661820812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Jan 14 00:04:33.664881 containerd[2133]: time="2026-01-14T00:04:33.664825902Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:04:33.669390 containerd[2133]: time="2026-01-14T00:04:33.669293143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:04:33.669694 containerd[2133]: time="2026-01-14T00:04:33.669612588Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.196524384s" Jan 14 00:04:33.669694 containerd[2133]: time="2026-01-14T00:04:33.669641981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 14 00:04:33.672009 containerd[2133]: time="2026-01-14T00:04:33.671974707Z" level=info msg="CreateContainer within sandbox \"0113002abbb724f5b783ea9789789ec5d8a728ebcc9b05ecf42478d91f833a08\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 14 00:04:33.697088 containerd[2133]: time="2026-01-14T00:04:33.696295682Z" level=info msg="Container b0f5d3b73a22e6796e5eef0b99c58e8e577cbb92dd629f2b22dec7d711d86f94: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:04:33.714134 containerd[2133]: time="2026-01-14T00:04:33.714078390Z" level=info msg="CreateContainer within sandbox \"0113002abbb724f5b783ea9789789ec5d8a728ebcc9b05ecf42478d91f833a08\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b0f5d3b73a22e6796e5eef0b99c58e8e577cbb92dd629f2b22dec7d711d86f94\"" Jan 14 00:04:33.714917 containerd[2133]: 
time="2026-01-14T00:04:33.714868955Z" level=info msg="StartContainer for \"b0f5d3b73a22e6796e5eef0b99c58e8e577cbb92dd629f2b22dec7d711d86f94\"" Jan 14 00:04:33.716498 containerd[2133]: time="2026-01-14T00:04:33.716469285Z" level=info msg="connecting to shim b0f5d3b73a22e6796e5eef0b99c58e8e577cbb92dd629f2b22dec7d711d86f94" address="unix:///run/containerd/s/27a0e9155a050f8eef4bdf357f56396add2424617b2056b03f10ad59c60e84fd" protocol=ttrpc version=3 Jan 14 00:04:33.737361 systemd[1]: Started cri-containerd-b0f5d3b73a22e6796e5eef0b99c58e8e577cbb92dd629f2b22dec7d711d86f94.scope - libcontainer container b0f5d3b73a22e6796e5eef0b99c58e8e577cbb92dd629f2b22dec7d711d86f94. Jan 14 00:04:33.789000 audit: BPF prog-id=193 op=LOAD Jan 14 00:04:33.792915 kernel: kauditd_printk_skb: 84 callbacks suppressed Jan 14 00:04:33.793403 kernel: audit: type=1334 audit(1768349073.789:584): prog-id=193 op=LOAD Jan 14 00:04:33.789000 audit[4393]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4181 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:33.815433 kernel: audit: type=1300 audit(1768349073.789:584): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4181 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:33.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230663564336237336132326536373936653565656630623939633538 Jan 14 00:04:33.835290 kernel: audit: type=1327 audit(1768349073.789:584): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230663564336237336132326536373936653565656630623939633538 Jan 14 00:04:33.790000 audit: BPF prog-id=194 op=LOAD Jan 14 00:04:33.840206 kernel: audit: type=1334 audit(1768349073.790:585): prog-id=194 op=LOAD Jan 14 00:04:33.790000 audit[4393]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4181 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:33.858206 kernel: audit: type=1300 audit(1768349073.790:585): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4181 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:33.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230663564336237336132326536373936653565656630623939633538 Jan 14 00:04:33.875536 kernel: audit: type=1327 audit(1768349073.790:585): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230663564336237336132326536373936653565656630623939633538 Jan 14 00:04:33.793000 audit: BPF prog-id=194 op=UNLOAD Jan 14 00:04:33.886664 kernel: audit: type=1334 audit(1768349073.793:586): prog-id=194 op=UNLOAD Jan 14 00:04:33.793000 audit[4393]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4181 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:33.904740 kernel: audit: type=1300 audit(1768349073.793:586): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4181 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:33.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230663564336237336132326536373936653565656630623939633538 Jan 14 00:04:33.922575 kernel: audit: type=1327 audit(1768349073.793:586): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230663564336237336132326536373936653565656630623939633538 Jan 14 00:04:33.793000 audit: BPF prog-id=193 op=UNLOAD Jan 14 00:04:33.927904 kernel: audit: type=1334 audit(1768349073.793:587): prog-id=193 op=UNLOAD Jan 14 00:04:33.793000 audit[4393]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4181 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:33.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230663564336237336132326536373936653565656630623939633538 Jan 14 00:04:33.793000 audit: BPF prog-id=195 op=LOAD Jan 14 00:04:33.793000 audit[4393]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4181 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:33.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230663564336237336132326536373936653565656630623939633538 Jan 14 00:04:33.952042 containerd[2133]: time="2026-01-14T00:04:33.952009305Z" level=info msg="StartContainer for \"b0f5d3b73a22e6796e5eef0b99c58e8e577cbb92dd629f2b22dec7d711d86f94\" returns successfully" Jan 14 00:04:34.381174 kubelet[3683]: E0114 00:04:34.380611 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hgz55" podUID="f0601279-098f-420b-84a8-b4028d2c0ea2" Jan 14 00:04:35.079494 containerd[2133]: time="2026-01-14T00:04:35.079444352Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 00:04:35.081662 systemd[1]: cri-containerd-b0f5d3b73a22e6796e5eef0b99c58e8e577cbb92dd629f2b22dec7d711d86f94.scope: Deactivated successfully. Jan 14 00:04:35.081926 systemd[1]: cri-containerd-b0f5d3b73a22e6796e5eef0b99c58e8e577cbb92dd629f2b22dec7d711d86f94.scope: Consumed 339ms CPU time, 184.9M memory peak, 165.9M written to disk. Jan 14 00:04:35.083438 containerd[2133]: time="2026-01-14T00:04:35.083320482Z" level=info msg="received container exit event container_id:\"b0f5d3b73a22e6796e5eef0b99c58e8e577cbb92dd629f2b22dec7d711d86f94\" id:\"b0f5d3b73a22e6796e5eef0b99c58e8e577cbb92dd629f2b22dec7d711d86f94\" pid:4406 exited_at:{seconds:1768349075 nanos:82842392}" Jan 14 00:04:35.085000 audit: BPF prog-id=195 op=UNLOAD Jan 14 00:04:35.105914 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0f5d3b73a22e6796e5eef0b99c58e8e577cbb92dd629f2b22dec7d711d86f94-rootfs.mount: Deactivated successfully. Jan 14 00:04:35.122348 kubelet[3683]: I0114 00:04:35.122324 3683 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 14 00:04:35.473470 kubelet[3683]: I0114 00:04:35.213453 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52895973-a9d8-41ff-890b-151c819ea908-tigera-ca-bundle\") pod \"calico-kube-controllers-79cf4bbcf4-jhdlg\" (UID: \"52895973-a9d8-41ff-890b-151c819ea908\") " pod="calico-system/calico-kube-controllers-79cf4bbcf4-jhdlg" Jan 14 00:04:35.473470 kubelet[3683]: I0114 00:04:35.213484 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/804e2c94-fdab-4f15-b317-31c3221bea29-config-volume\") pod \"coredns-668d6bf9bc-6zdb5\" (UID: \"804e2c94-fdab-4f15-b317-31c3221bea29\") " pod="kube-system/coredns-668d6bf9bc-6zdb5" Jan 14 00:04:35.473470 kubelet[3683]: I0114 00:04:35.213500 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd23407e-e7fa-43bd-b827-67d8fab88d3b-config\") pod \"goldmane-666569f655-pdm6c\" (UID: \"dd23407e-e7fa-43bd-b827-67d8fab88d3b\") " pod="calico-system/goldmane-666569f655-pdm6c" Jan 14 00:04:35.473470 kubelet[3683]: I0114 00:04:35.213513 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/dd23407e-e7fa-43bd-b827-67d8fab88d3b-goldmane-key-pair\") pod \"goldmane-666569f655-pdm6c\" (UID: \"dd23407e-e7fa-43bd-b827-67d8fab88d3b\") " pod="calico-system/goldmane-666569f655-pdm6c" Jan 14 00:04:35.473470 kubelet[3683]: I0114 00:04:35.213527 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7bbf\" (UniqueName: \"kubernetes.io/projected/804e2c94-fdab-4f15-b317-31c3221bea29-kube-api-access-f7bbf\") pod 
\"coredns-668d6bf9bc-6zdb5\" (UID: \"804e2c94-fdab-4f15-b317-31c3221bea29\") " pod="kube-system/coredns-668d6bf9bc-6zdb5" Jan 14 00:04:35.164707 systemd[1]: Created slice kubepods-burstable-pod804e2c94_fdab_4f15_b317_31c3221bea29.slice - libcontainer container kubepods-burstable-pod804e2c94_fdab_4f15_b317_31c3221bea29.slice. Jan 14 00:04:35.474978 kubelet[3683]: I0114 00:04:35.213537 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/102726bc-c7ae-4ed2-8671-64d4ae366366-whisker-backend-key-pair\") pod \"whisker-59ff59b699-dqltd\" (UID: \"102726bc-c7ae-4ed2-8671-64d4ae366366\") " pod="calico-system/whisker-59ff59b699-dqltd" Jan 14 00:04:35.474978 kubelet[3683]: I0114 00:04:35.213552 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fplcm\" (UniqueName: \"kubernetes.io/projected/a414c0bf-06ad-47c5-a4fa-4356f045a557-kube-api-access-fplcm\") pod \"coredns-668d6bf9bc-rp5x8\" (UID: \"a414c0bf-06ad-47c5-a4fa-4356f045a557\") " pod="kube-system/coredns-668d6bf9bc-rp5x8" Jan 14 00:04:35.474978 kubelet[3683]: I0114 00:04:35.213565 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xl4w\" (UniqueName: \"kubernetes.io/projected/dd23407e-e7fa-43bd-b827-67d8fab88d3b-kube-api-access-7xl4w\") pod \"goldmane-666569f655-pdm6c\" (UID: \"dd23407e-e7fa-43bd-b827-67d8fab88d3b\") " pod="calico-system/goldmane-666569f655-pdm6c" Jan 14 00:04:35.474978 kubelet[3683]: I0114 00:04:35.213576 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a414c0bf-06ad-47c5-a4fa-4356f045a557-config-volume\") pod \"coredns-668d6bf9bc-rp5x8\" (UID: \"a414c0bf-06ad-47c5-a4fa-4356f045a557\") " pod="kube-system/coredns-668d6bf9bc-rp5x8" Jan 14 00:04:35.474978 kubelet[3683]: I0114 00:04:35.213587 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/23297b6d-ba28-4d3e-a11a-c82aeab97bbe-calico-apiserver-certs\") pod \"calico-apiserver-766dfc88bb-vxthm\" (UID: \"23297b6d-ba28-4d3e-a11a-c82aeab97bbe\") " pod="calico-apiserver/calico-apiserver-766dfc88bb-vxthm" Jan 14 00:04:35.175546 systemd[1]: Created slice kubepods-besteffort-pod102726bc_c7ae_4ed2_8671_64d4ae366366.slice - libcontainer container kubepods-besteffort-pod102726bc_c7ae_4ed2_8671_64d4ae366366.slice. 
Jan 14 00:04:35.475332 kubelet[3683]: I0114 00:04:35.213599 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd23407e-e7fa-43bd-b827-67d8fab88d3b-goldmane-ca-bundle\") pod \"goldmane-666569f655-pdm6c\" (UID: \"dd23407e-e7fa-43bd-b827-67d8fab88d3b\") " pod="calico-system/goldmane-666569f655-pdm6c" Jan 14 00:04:35.475332 kubelet[3683]: I0114 00:04:35.213612 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/52b59e1e-92be-4298-96f1-1e43387d21fa-calico-apiserver-certs\") pod \"calico-apiserver-766dfc88bb-t7bd6\" (UID: \"52b59e1e-92be-4298-96f1-1e43387d21fa\") " pod="calico-apiserver/calico-apiserver-766dfc88bb-t7bd6" Jan 14 00:04:35.475332 kubelet[3683]: I0114 00:04:35.213621 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxtgn\" (UniqueName: \"kubernetes.io/projected/23297b6d-ba28-4d3e-a11a-c82aeab97bbe-kube-api-access-zxtgn\") pod \"calico-apiserver-766dfc88bb-vxthm\" (UID: \"23297b6d-ba28-4d3e-a11a-c82aeab97bbe\") " pod="calico-apiserver/calico-apiserver-766dfc88bb-vxthm" Jan 14 00:04:35.475332 kubelet[3683]: I0114 00:04:35.213633 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/102726bc-c7ae-4ed2-8671-64d4ae366366-whisker-ca-bundle\") pod \"whisker-59ff59b699-dqltd\" (UID: \"102726bc-c7ae-4ed2-8671-64d4ae366366\") " pod="calico-system/whisker-59ff59b699-dqltd" Jan 14 00:04:35.475332 kubelet[3683]: I0114 00:04:35.213643 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrzlq\" (UniqueName: \"kubernetes.io/projected/102726bc-c7ae-4ed2-8671-64d4ae366366-kube-api-access-zrzlq\") pod \"whisker-59ff59b699-dqltd\" (UID: \"102726bc-c7ae-4ed2-8671-64d4ae366366\") " pod="calico-system/whisker-59ff59b699-dqltd" Jan 14 00:04:35.186184 systemd[1]: Created slice kubepods-burstable-poda414c0bf_06ad_47c5_a4fa_4356f045a557.slice - libcontainer container kubepods-burstable-poda414c0bf_06ad_47c5_a4fa_4356f045a557.slice. Jan 14 00:04:35.475689 kubelet[3683]: I0114 00:04:35.213659 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttqb8\" (UniqueName: \"kubernetes.io/projected/52895973-a9d8-41ff-890b-151c819ea908-kube-api-access-ttqb8\") pod \"calico-kube-controllers-79cf4bbcf4-jhdlg\" (UID: \"52895973-a9d8-41ff-890b-151c819ea908\") " pod="calico-system/calico-kube-controllers-79cf4bbcf4-jhdlg" Jan 14 00:04:35.475689 kubelet[3683]: I0114 00:04:35.213669 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr9bj\" (UniqueName: \"kubernetes.io/projected/52b59e1e-92be-4298-96f1-1e43387d21fa-kube-api-access-sr9bj\") pod \"calico-apiserver-766dfc88bb-t7bd6\" (UID: \"52b59e1e-92be-4298-96f1-1e43387d21fa\") " pod="calico-apiserver/calico-apiserver-766dfc88bb-t7bd6" Jan 14 00:04:35.193561 systemd[1]: Created slice kubepods-besteffort-pod52895973_a9d8_41ff_890b_151c819ea908.slice - libcontainer container kubepods-besteffort-pod52895973_a9d8_41ff_890b_151c819ea908.slice. 
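[annotation] The burst of VerifyControllerAttachedVolume entries above is the kubelet volume reconciler registering each pod volume under a unique name of the form "<plugin>/<podUID>-<volumeName>" (e.g. kubernetes.io/configmap/804e2c94-fdab-4f15-b317-31c3221bea29-config-volume for the coredns pod). The sketch below shows the pod-spec side of two such volumes using the k8s.io/api types; the volume names and pod UID are taken from the log, while the referenced ConfigMap and Secret object names are assumptions for illustration.

```go
// volumes_sketch.go - pod-spec counterpart of the volume reconciler entries above (illustrative only).
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	volumes := []corev1.Volume{
		{
			Name: "config-volume", // surfaces as kubernetes.io/configmap/<podUID>-config-volume
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					// Assumed ConfigMap name; the log only shows the volume name.
					LocalObjectReference: corev1.LocalObjectReference{Name: "coredns"},
				},
			},
		},
		{
			Name: "goldmane-key-pair", // surfaces as kubernetes.io/secret/<podUID>-goldmane-key-pair
			VolumeSource: corev1.VolumeSource{
				// Assumed Secret name matching the volume name.
				Secret: &corev1.SecretVolumeSource{SecretName: "goldmane-key-pair"},
			},
		},
	}

	// Reconstruct the reconciler's unique volume name for the coredns configmap volume.
	podUID := "804e2c94-fdab-4f15-b317-31c3221bea29"
	fmt.Printf("kubernetes.io/configmap/%s-%s\n", podUID, volumes[0].Name)
}
```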
Jan 14 00:04:35.201058 systemd[1]: Created slice kubepods-besteffort-pod52b59e1e_92be_4298_96f1_1e43387d21fa.slice - libcontainer container kubepods-besteffort-pod52b59e1e_92be_4298_96f1_1e43387d21fa.slice. Jan 14 00:04:35.207930 systemd[1]: Created slice kubepods-besteffort-poddd23407e_e7fa_43bd_b827_67d8fab88d3b.slice - libcontainer container kubepods-besteffort-poddd23407e_e7fa_43bd_b827_67d8fab88d3b.slice. Jan 14 00:04:35.212888 systemd[1]: Created slice kubepods-besteffort-pod23297b6d_ba28_4d3e_a11a_c82aeab97bbe.slice - libcontainer container kubepods-besteffort-pod23297b6d_ba28_4d3e_a11a_c82aeab97bbe.slice. Jan 14 00:04:35.778551 containerd[2133]: time="2026-01-14T00:04:35.778402705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6zdb5,Uid:804e2c94-fdab-4f15-b317-31c3221bea29,Namespace:kube-system,Attempt:0,}" Jan 14 00:04:35.786055 containerd[2133]: time="2026-01-14T00:04:35.786016059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79cf4bbcf4-jhdlg,Uid:52895973-a9d8-41ff-890b-151c819ea908,Namespace:calico-system,Attempt:0,}" Jan 14 00:04:35.801473 containerd[2133]: time="2026-01-14T00:04:35.801391554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59ff59b699-dqltd,Uid:102726bc-c7ae-4ed2-8671-64d4ae366366,Namespace:calico-system,Attempt:0,}" Jan 14 00:04:35.801473 containerd[2133]: time="2026-01-14T00:04:35.801413875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-766dfc88bb-t7bd6,Uid:52b59e1e-92be-4298-96f1-1e43387d21fa,Namespace:calico-apiserver,Attempt:0,}" Jan 14 00:04:35.801473 containerd[2133]: time="2026-01-14T00:04:35.801447788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-766dfc88bb-vxthm,Uid:23297b6d-ba28-4d3e-a11a-c82aeab97bbe,Namespace:calico-apiserver,Attempt:0,}" Jan 14 00:04:35.801697 containerd[2133]: time="2026-01-14T00:04:35.801670304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pdm6c,Uid:dd23407e-e7fa-43bd-b827-67d8fab88d3b,Namespace:calico-system,Attempt:0,}" Jan 14 00:04:35.804378 containerd[2133]: time="2026-01-14T00:04:35.804350217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rp5x8,Uid:a414c0bf-06ad-47c5-a4fa-4356f045a557,Namespace:kube-system,Attempt:0,}" Jan 14 00:04:35.987330 containerd[2133]: time="2026-01-14T00:04:35.987283056Z" level=error msg="Failed to destroy network for sandbox \"1b3136283e675a938415c6af3d0f5d7d5553e0f2caec7e0e76fdddbccf87d48c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:04:36.020934 containerd[2133]: time="2026-01-14T00:04:36.020386881Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rp5x8,Uid:a414c0bf-06ad-47c5-a4fa-4356f045a557,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b3136283e675a938415c6af3d0f5d7d5553e0f2caec7e0e76fdddbccf87d48c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:04:36.022526 kubelet[3683]: E0114 00:04:36.022258 3683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1b3136283e675a938415c6af3d0f5d7d5553e0f2caec7e0e76fdddbccf87d48c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:04:36.022526 kubelet[3683]: E0114 00:04:36.022348 3683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b3136283e675a938415c6af3d0f5d7d5553e0f2caec7e0e76fdddbccf87d48c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rp5x8" Jan 14 00:04:36.022526 kubelet[3683]: E0114 00:04:36.022365 3683 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b3136283e675a938415c6af3d0f5d7d5553e0f2caec7e0e76fdddbccf87d48c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rp5x8" Jan 14 00:04:36.023258 kubelet[3683]: E0114 00:04:36.022402 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rp5x8_kube-system(a414c0bf-06ad-47c5-a4fa-4356f045a557)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rp5x8_kube-system(a414c0bf-06ad-47c5-a4fa-4356f045a557)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b3136283e675a938415c6af3d0f5d7d5553e0f2caec7e0e76fdddbccf87d48c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rp5x8" podUID="a414c0bf-06ad-47c5-a4fa-4356f045a557" Jan 14 00:04:36.098876 containerd[2133]: time="2026-01-14T00:04:36.098304468Z" level=error msg="Failed to destroy network for sandbox \"435028b3576ab2fcacc1acbf838a83230fffd2fcbcaf52064432556b42396688\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:04:36.112070 containerd[2133]: time="2026-01-14T00:04:36.111872317Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6zdb5,Uid:804e2c94-fdab-4f15-b317-31c3221bea29,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"435028b3576ab2fcacc1acbf838a83230fffd2fcbcaf52064432556b42396688\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:04:36.112435 kubelet[3683]: E0114 00:04:36.112105 3683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"435028b3576ab2fcacc1acbf838a83230fffd2fcbcaf52064432556b42396688\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:04:36.112435 kubelet[3683]: E0114 00:04:36.112228 3683 kuberuntime_sandbox.go:72] "Failed to create sandbox 
for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"435028b3576ab2fcacc1acbf838a83230fffd2fcbcaf52064432556b42396688\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6zdb5" Jan 14 00:04:36.112435 kubelet[3683]: E0114 00:04:36.112247 3683 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"435028b3576ab2fcacc1acbf838a83230fffd2fcbcaf52064432556b42396688\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6zdb5" Jan 14 00:04:36.113291 kubelet[3683]: E0114 00:04:36.112299 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6zdb5_kube-system(804e2c94-fdab-4f15-b317-31c3221bea29)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6zdb5_kube-system(804e2c94-fdab-4f15-b317-31c3221bea29)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"435028b3576ab2fcacc1acbf838a83230fffd2fcbcaf52064432556b42396688\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6zdb5" podUID="804e2c94-fdab-4f15-b317-31c3221bea29" Jan 14 00:04:36.125390 containerd[2133]: time="2026-01-14T00:04:36.125146871Z" level=error msg="Failed to destroy network for sandbox \"0a1361058bd43053bd6832cefc16b7a2024c2c9b325d61c2534038d18317468e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:04:36.127327 containerd[2133]: time="2026-01-14T00:04:36.127082640Z" level=error msg="Failed to destroy network for sandbox \"6ed4b99d45f1691b1f2e7325c6f55926acef400840e1450499586ca2a95236c8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:04:36.127949 systemd[1]: run-netns-cni\x2d2b65ab62\x2d87bf\x2d949e\x2d0a72\x2d62f0b085e085.mount: Deactivated successfully. Jan 14 00:04:36.130043 systemd[1]: run-netns-cni\x2d5e845715\x2d4afe\x2d6f8d\x2daf6d\x2d683196c7615a.mount: Deactivated successfully. Jan 14 00:04:36.131033 containerd[2133]: time="2026-01-14T00:04:36.130995772Z" level=error msg="Failed to destroy network for sandbox \"afd25af9da0e95ef8f358e50598f8cc98ebac86daf8562c61f0642279f2fe294\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:04:36.132624 systemd[1]: run-netns-cni\x2d55a3a6f3\x2d3cdd\x2df1a5\x2d0d39\x2d01cffaf0e72a.mount: Deactivated successfully. 
Jan 14 00:04:36.137004 containerd[2133]: time="2026-01-14T00:04:36.136953450Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79cf4bbcf4-jhdlg,Uid:52895973-a9d8-41ff-890b-151c819ea908,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a1361058bd43053bd6832cefc16b7a2024c2c9b325d61c2534038d18317468e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:04:36.137259 kubelet[3683]: E0114 00:04:36.137201 3683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a1361058bd43053bd6832cefc16b7a2024c2c9b325d61c2534038d18317468e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:04:36.137316 kubelet[3683]: E0114 00:04:36.137282 3683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a1361058bd43053bd6832cefc16b7a2024c2c9b325d61c2534038d18317468e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79cf4bbcf4-jhdlg" Jan 14 00:04:36.137316 kubelet[3683]: E0114 00:04:36.137299 3683 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a1361058bd43053bd6832cefc16b7a2024c2c9b325d61c2534038d18317468e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79cf4bbcf4-jhdlg" Jan 14 00:04:36.137375 kubelet[3683]: E0114 00:04:36.137342 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79cf4bbcf4-jhdlg_calico-system(52895973-a9d8-41ff-890b-151c819ea908)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79cf4bbcf4-jhdlg_calico-system(52895973-a9d8-41ff-890b-151c819ea908)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a1361058bd43053bd6832cefc16b7a2024c2c9b325d61c2534038d18317468e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79cf4bbcf4-jhdlg" podUID="52895973-a9d8-41ff-890b-151c819ea908" Jan 14 00:04:36.140350 containerd[2133]: time="2026-01-14T00:04:36.138420770Z" level=error msg="Failed to destroy network for sandbox \"8119f87662d75459998aa00ca94a36205fbdda21b5572eb555fcaac90701cc0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:04:36.140134 systemd[1]: run-netns-cni\x2d1f55f711\x2de78c\x2d5baa\x2d84ae\x2ddacc12e42216.mount: Deactivated successfully. 
Jan 14 00:04:36.146469 containerd[2133]: time="2026-01-14T00:04:36.146417956Z" level=error msg="Failed to destroy network for sandbox \"40ed541fabebc3e135c6f6e975ee79087e0184643704338a923cb8d3fcdf1a1a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:04:36.150643 containerd[2133]: time="2026-01-14T00:04:36.150597845Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pdm6c,Uid:dd23407e-e7fa-43bd-b827-67d8fab88d3b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ed4b99d45f1691b1f2e7325c6f55926acef400840e1450499586ca2a95236c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:04:36.150869 kubelet[3683]: E0114 00:04:36.150814 3683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ed4b99d45f1691b1f2e7325c6f55926acef400840e1450499586ca2a95236c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:04:36.150932 kubelet[3683]: E0114 00:04:36.150912 3683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ed4b99d45f1691b1f2e7325c6f55926acef400840e1450499586ca2a95236c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-pdm6c" Jan 14 00:04:36.150932 kubelet[3683]: E0114 00:04:36.150928 3683 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ed4b99d45f1691b1f2e7325c6f55926acef400840e1450499586ca2a95236c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-pdm6c" Jan 14 00:04:36.151018 kubelet[3683]: E0114 00:04:36.150991 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-pdm6c_calico-system(dd23407e-e7fa-43bd-b827-67d8fab88d3b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-pdm6c_calico-system(dd23407e-e7fa-43bd-b827-67d8fab88d3b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ed4b99d45f1691b1f2e7325c6f55926acef400840e1450499586ca2a95236c8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-pdm6c" podUID="dd23407e-e7fa-43bd-b827-67d8fab88d3b" Jan 14 00:04:36.153550 containerd[2133]: time="2026-01-14T00:04:36.153510923Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-766dfc88bb-t7bd6,Uid:52b59e1e-92be-4298-96f1-1e43387d21fa,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"afd25af9da0e95ef8f358e50598f8cc98ebac86daf8562c61f0642279f2fe294\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:04:36.153845 kubelet[3683]: E0114 00:04:36.153814 3683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afd25af9da0e95ef8f358e50598f8cc98ebac86daf8562c61f0642279f2fe294\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:04:36.153896 kubelet[3683]: E0114 00:04:36.153855 3683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afd25af9da0e95ef8f358e50598f8cc98ebac86daf8562c61f0642279f2fe294\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-766dfc88bb-t7bd6" Jan 14 00:04:36.153924 kubelet[3683]: E0114 00:04:36.153870 3683 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afd25af9da0e95ef8f358e50598f8cc98ebac86daf8562c61f0642279f2fe294\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-766dfc88bb-t7bd6" Jan 14 00:04:36.154192 kubelet[3683]: E0114 00:04:36.153935 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-766dfc88bb-t7bd6_calico-apiserver(52b59e1e-92be-4298-96f1-1e43387d21fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-766dfc88bb-t7bd6_calico-apiserver(52b59e1e-92be-4298-96f1-1e43387d21fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"afd25af9da0e95ef8f358e50598f8cc98ebac86daf8562c61f0642279f2fe294\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-766dfc88bb-t7bd6" podUID="52b59e1e-92be-4298-96f1-1e43387d21fa" Jan 14 00:04:36.159823 containerd[2133]: time="2026-01-14T00:04:36.159632565Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-766dfc88bb-vxthm,Uid:23297b6d-ba28-4d3e-a11a-c82aeab97bbe,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8119f87662d75459998aa00ca94a36205fbdda21b5572eb555fcaac90701cc0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:04:36.159933 kubelet[3683]: E0114 00:04:36.159897 3683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8119f87662d75459998aa00ca94a36205fbdda21b5572eb555fcaac90701cc0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 14 00:04:36.159966 kubelet[3683]: E0114 00:04:36.159945 3683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8119f87662d75459998aa00ca94a36205fbdda21b5572eb555fcaac90701cc0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-766dfc88bb-vxthm" Jan 14 00:04:36.159966 kubelet[3683]: E0114 00:04:36.159960 3683 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8119f87662d75459998aa00ca94a36205fbdda21b5572eb555fcaac90701cc0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-766dfc88bb-vxthm" Jan 14 00:04:36.160010 kubelet[3683]: E0114 00:04:36.159992 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-766dfc88bb-vxthm_calico-apiserver(23297b6d-ba28-4d3e-a11a-c82aeab97bbe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-766dfc88bb-vxthm_calico-apiserver(23297b6d-ba28-4d3e-a11a-c82aeab97bbe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8119f87662d75459998aa00ca94a36205fbdda21b5572eb555fcaac90701cc0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-766dfc88bb-vxthm" podUID="23297b6d-ba28-4d3e-a11a-c82aeab97bbe" Jan 14 00:04:36.163227 containerd[2133]: time="2026-01-14T00:04:36.163188505Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59ff59b699-dqltd,Uid:102726bc-c7ae-4ed2-8671-64d4ae366366,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"40ed541fabebc3e135c6f6e975ee79087e0184643704338a923cb8d3fcdf1a1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:04:36.163480 kubelet[3683]: E0114 00:04:36.163446 3683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40ed541fabebc3e135c6f6e975ee79087e0184643704338a923cb8d3fcdf1a1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:04:36.163480 kubelet[3683]: E0114 00:04:36.163486 3683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40ed541fabebc3e135c6f6e975ee79087e0184643704338a923cb8d3fcdf1a1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59ff59b699-dqltd" Jan 14 00:04:36.163562 kubelet[3683]: E0114 00:04:36.163499 3683 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"40ed541fabebc3e135c6f6e975ee79087e0184643704338a923cb8d3fcdf1a1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59ff59b699-dqltd" Jan 14 00:04:36.163562 kubelet[3683]: E0114 00:04:36.163522 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-59ff59b699-dqltd_calico-system(102726bc-c7ae-4ed2-8671-64d4ae366366)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-59ff59b699-dqltd_calico-system(102726bc-c7ae-4ed2-8671-64d4ae366366)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40ed541fabebc3e135c6f6e975ee79087e0184643704338a923cb8d3fcdf1a1a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-59ff59b699-dqltd" podUID="102726bc-c7ae-4ed2-8671-64d4ae366366" Jan 14 00:04:36.387202 systemd[1]: Created slice kubepods-besteffort-podf0601279_098f_420b_84a8_b4028d2c0ea2.slice - libcontainer container kubepods-besteffort-podf0601279_098f_420b_84a8_b4028d2c0ea2.slice. Jan 14 00:04:36.389861 containerd[2133]: time="2026-01-14T00:04:36.389819258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hgz55,Uid:f0601279-098f-420b-84a8-b4028d2c0ea2,Namespace:calico-system,Attempt:0,}" Jan 14 00:04:36.431698 containerd[2133]: time="2026-01-14T00:04:36.431646773Z" level=error msg="Failed to destroy network for sandbox \"283933c446f4089e921ace68b8d939e807f1381d7a40b53d61e5469ffc274f65\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:04:36.439493 containerd[2133]: time="2026-01-14T00:04:36.439444747Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hgz55,Uid:f0601279-098f-420b-84a8-b4028d2c0ea2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"283933c446f4089e921ace68b8d939e807f1381d7a40b53d61e5469ffc274f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:04:36.439764 kubelet[3683]: E0114 00:04:36.439716 3683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"283933c446f4089e921ace68b8d939e807f1381d7a40b53d61e5469ffc274f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:04:36.439814 kubelet[3683]: E0114 00:04:36.439789 3683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"283933c446f4089e921ace68b8d939e807f1381d7a40b53d61e5469ffc274f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hgz55" Jan 14 00:04:36.439814 kubelet[3683]: E0114 00:04:36.439805 3683 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"283933c446f4089e921ace68b8d939e807f1381d7a40b53d61e5469ffc274f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hgz55" Jan 14 00:04:36.439876 kubelet[3683]: E0114 00:04:36.439849 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hgz55_calico-system(f0601279-098f-420b-84a8-b4028d2c0ea2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hgz55_calico-system(f0601279-098f-420b-84a8-b4028d2c0ea2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"283933c446f4089e921ace68b8d939e807f1381d7a40b53d61e5469ffc274f65\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hgz55" podUID="f0601279-098f-420b-84a8-b4028d2c0ea2" Jan 14 00:04:36.518402 containerd[2133]: time="2026-01-14T00:04:36.518278689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 14 00:04:37.106274 systemd[1]: run-netns-cni\x2d130b684f\x2d7cdd\x2dbdb0\x2d2883\x2d8e9690e38fa7.mount: Deactivated successfully. Jan 14 00:04:39.433168 kubelet[3683]: I0114 00:04:39.433124 3683 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 00:04:39.464000 audit[4672]: NETFILTER_CFG table=filter:122 family=2 entries=21 op=nft_register_rule pid=4672 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:39.469196 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 14 00:04:39.469303 kernel: audit: type=1325 audit(1768349079.464:590): table=filter:122 family=2 entries=21 op=nft_register_rule pid=4672 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:39.464000 audit[4672]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc2dd2b20 a2=0 a3=1 items=0 ppid=3789 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:39.499691 kernel: audit: type=1300 audit(1768349079.464:590): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc2dd2b20 a2=0 a3=1 items=0 ppid=3789 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:39.464000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:39.509300 kernel: audit: type=1327 audit(1768349079.464:590): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:39.480000 audit[4672]: NETFILTER_CFG table=nat:123 family=2 entries=19 op=nft_register_chain pid=4672 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:39.519136 kernel: audit: type=1325 audit(1768349079.480:591): table=nat:123 family=2 entries=19 op=nft_register_chain pid=4672 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 
00:04:39.480000 audit[4672]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffc2dd2b20 a2=0 a3=1 items=0 ppid=3789 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:39.537702 kernel: audit: type=1300 audit(1768349079.480:591): arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffc2dd2b20 a2=0 a3=1 items=0 ppid=3789 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:39.480000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:39.547913 kernel: audit: type=1327 audit(1768349079.480:591): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:42.883462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2897011915.mount: Deactivated successfully. Jan 14 00:04:43.126776 containerd[2133]: time="2026-01-14T00:04:43.126704887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:04:43.129638 containerd[2133]: time="2026-01-14T00:04:43.129471706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150931442" Jan 14 00:04:43.132424 containerd[2133]: time="2026-01-14T00:04:43.132395017Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:04:43.136558 containerd[2133]: time="2026-01-14T00:04:43.136250819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:04:43.136856 containerd[2133]: time="2026-01-14T00:04:43.136779678Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.618112869s" Jan 14 00:04:43.136946 containerd[2133]: time="2026-01-14T00:04:43.136928873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 14 00:04:43.154349 containerd[2133]: time="2026-01-14T00:04:43.154219562Z" level=info msg="CreateContainer within sandbox \"0113002abbb724f5b783ea9789789ec5d8a728ebcc9b05ecf42478d91f833a08\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 14 00:04:43.180080 containerd[2133]: time="2026-01-14T00:04:43.179797611Z" level=info msg="Container 2899d327d636a6d72a6d1c4353894060351b3a87d9c114d90d39863ee2f432c2: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:04:43.197577 containerd[2133]: time="2026-01-14T00:04:43.197444755Z" level=info msg="CreateContainer within sandbox \"0113002abbb724f5b783ea9789789ec5d8a728ebcc9b05ecf42478d91f833a08\" for &ContainerMetadata{Name:calico-node,Attempt:0,} 
returns container id \"2899d327d636a6d72a6d1c4353894060351b3a87d9c114d90d39863ee2f432c2\"" Jan 14 00:04:43.198340 containerd[2133]: time="2026-01-14T00:04:43.198311341Z" level=info msg="StartContainer for \"2899d327d636a6d72a6d1c4353894060351b3a87d9c114d90d39863ee2f432c2\"" Jan 14 00:04:43.199901 containerd[2133]: time="2026-01-14T00:04:43.199871543Z" level=info msg="connecting to shim 2899d327d636a6d72a6d1c4353894060351b3a87d9c114d90d39863ee2f432c2" address="unix:///run/containerd/s/27a0e9155a050f8eef4bdf357f56396add2424617b2056b03f10ad59c60e84fd" protocol=ttrpc version=3 Jan 14 00:04:43.237356 systemd[1]: Started cri-containerd-2899d327d636a6d72a6d1c4353894060351b3a87d9c114d90d39863ee2f432c2.scope - libcontainer container 2899d327d636a6d72a6d1c4353894060351b3a87d9c114d90d39863ee2f432c2. Jan 14 00:04:43.283000 audit: BPF prog-id=196 op=LOAD Jan 14 00:04:43.307689 kernel: audit: type=1334 audit(1768349083.283:592): prog-id=196 op=LOAD Jan 14 00:04:43.307960 kernel: audit: type=1300 audit(1768349083.283:592): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4181 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:43.283000 audit[4679]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4181 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:43.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238393964333237643633366136643732613664316334333533383934 Jan 14 00:04:43.325619 kernel: audit: type=1327 audit(1768349083.283:592): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238393964333237643633366136643732613664316334333533383934 Jan 14 00:04:43.288000 audit: BPF prog-id=197 op=LOAD Jan 14 00:04:43.331101 kernel: audit: type=1334 audit(1768349083.288:593): prog-id=197 op=LOAD Jan 14 00:04:43.288000 audit[4679]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4181 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:43.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238393964333237643633366136643732613664316334333533383934 Jan 14 00:04:43.288000 audit: BPF prog-id=197 op=UNLOAD Jan 14 00:04:43.288000 audit[4679]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4181 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:43.288000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238393964333237643633366136643732613664316334333533383934 Jan 14 00:04:43.288000 audit: BPF prog-id=196 op=UNLOAD Jan 14 00:04:43.288000 audit[4679]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4181 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:43.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238393964333237643633366136643732613664316334333533383934 Jan 14 00:04:43.288000 audit: BPF prog-id=198 op=LOAD Jan 14 00:04:43.288000 audit[4679]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4181 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:43.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238393964333237643633366136643732613664316334333533383934 Jan 14 00:04:43.352955 containerd[2133]: time="2026-01-14T00:04:43.352919148Z" level=info msg="StartContainer for \"2899d327d636a6d72a6d1c4353894060351b3a87d9c114d90d39863ee2f432c2\" returns successfully" Jan 14 00:04:43.569229 kubelet[3683]: I0114 00:04:43.569075 3683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9txwq" podStartSLOduration=1.788442325 podStartE2EDuration="19.568282929s" podCreationTimestamp="2026-01-14 00:04:24 +0000 UTC" firstStartedPulling="2026-01-14 00:04:25.362216931 +0000 UTC m=+23.067728395" lastFinishedPulling="2026-01-14 00:04:43.142057543 +0000 UTC m=+40.847568999" observedRunningTime="2026-01-14 00:04:43.564140593 +0000 UTC m=+41.269652065" watchObservedRunningTime="2026-01-14 00:04:43.568282929 +0000 UTC m=+41.273794393" Jan 14 00:04:43.599100 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 14 00:04:43.599235 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jan 14 00:04:43.769373 kubelet[3683]: I0114 00:04:43.769331 3683 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrzlq\" (UniqueName: \"kubernetes.io/projected/102726bc-c7ae-4ed2-8671-64d4ae366366-kube-api-access-zrzlq\") pod \"102726bc-c7ae-4ed2-8671-64d4ae366366\" (UID: \"102726bc-c7ae-4ed2-8671-64d4ae366366\") " Jan 14 00:04:43.769373 kubelet[3683]: I0114 00:04:43.769390 3683 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/102726bc-c7ae-4ed2-8671-64d4ae366366-whisker-ca-bundle\") pod \"102726bc-c7ae-4ed2-8671-64d4ae366366\" (UID: \"102726bc-c7ae-4ed2-8671-64d4ae366366\") " Jan 14 00:04:43.769605 kubelet[3683]: I0114 00:04:43.769422 3683 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/102726bc-c7ae-4ed2-8671-64d4ae366366-whisker-backend-key-pair\") pod \"102726bc-c7ae-4ed2-8671-64d4ae366366\" (UID: \"102726bc-c7ae-4ed2-8671-64d4ae366366\") " Jan 14 00:04:43.776256 kubelet[3683]: I0114 00:04:43.776059 3683 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/102726bc-c7ae-4ed2-8671-64d4ae366366-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "102726bc-c7ae-4ed2-8671-64d4ae366366" (UID: "102726bc-c7ae-4ed2-8671-64d4ae366366"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 14 00:04:43.776954 kubelet[3683]: I0114 00:04:43.776765 3683 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/102726bc-c7ae-4ed2-8671-64d4ae366366-kube-api-access-zrzlq" (OuterVolumeSpecName: "kube-api-access-zrzlq") pod "102726bc-c7ae-4ed2-8671-64d4ae366366" (UID: "102726bc-c7ae-4ed2-8671-64d4ae366366"). InnerVolumeSpecName "kube-api-access-zrzlq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 14 00:04:43.778294 kubelet[3683]: I0114 00:04:43.778260 3683 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/102726bc-c7ae-4ed2-8671-64d4ae366366-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "102726bc-c7ae-4ed2-8671-64d4ae366366" (UID: "102726bc-c7ae-4ed2-8671-64d4ae366366"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 14 00:04:43.870720 kubelet[3683]: I0114 00:04:43.870552 3683 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zrzlq\" (UniqueName: \"kubernetes.io/projected/102726bc-c7ae-4ed2-8671-64d4ae366366-kube-api-access-zrzlq\") on node \"ci-4547.0.0-n-d5ef04779b\" DevicePath \"\"" Jan 14 00:04:43.870720 kubelet[3683]: I0114 00:04:43.870586 3683 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/102726bc-c7ae-4ed2-8671-64d4ae366366-whisker-ca-bundle\") on node \"ci-4547.0.0-n-d5ef04779b\" DevicePath \"\"" Jan 14 00:04:43.870720 kubelet[3683]: I0114 00:04:43.870596 3683 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/102726bc-c7ae-4ed2-8671-64d4ae366366-whisker-backend-key-pair\") on node \"ci-4547.0.0-n-d5ef04779b\" DevicePath \"\"" Jan 14 00:04:43.885740 systemd[1]: var-lib-kubelet-pods-102726bc\x2dc7ae\x2d4ed2\x2d8671\x2d64d4ae366366-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 14 00:04:43.885829 systemd[1]: var-lib-kubelet-pods-102726bc\x2dc7ae\x2d4ed2\x2d8671\x2d64d4ae366366-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzrzlq.mount: Deactivated successfully. Jan 14 00:04:44.387492 systemd[1]: Removed slice kubepods-besteffort-pod102726bc_c7ae_4ed2_8671_64d4ae366366.slice - libcontainer container kubepods-besteffort-pod102726bc_c7ae_4ed2_8671_64d4ae366366.slice. Jan 14 00:04:44.618797 systemd[1]: Created slice kubepods-besteffort-podef75aa50_7d3a_4b9a_95a7_c344bbb8239a.slice - libcontainer container kubepods-besteffort-podef75aa50_7d3a_4b9a_95a7_c344bbb8239a.slice. Jan 14 00:04:44.675941 kubelet[3683]: I0114 00:04:44.675895 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef75aa50-7d3a-4b9a-95a7-c344bbb8239a-whisker-ca-bundle\") pod \"whisker-67f75d98f9-d4x2n\" (UID: \"ef75aa50-7d3a-4b9a-95a7-c344bbb8239a\") " pod="calico-system/whisker-67f75d98f9-d4x2n" Jan 14 00:04:44.676360 kubelet[3683]: I0114 00:04:44.675949 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdl2t\" (UniqueName: \"kubernetes.io/projected/ef75aa50-7d3a-4b9a-95a7-c344bbb8239a-kube-api-access-wdl2t\") pod \"whisker-67f75d98f9-d4x2n\" (UID: \"ef75aa50-7d3a-4b9a-95a7-c344bbb8239a\") " pod="calico-system/whisker-67f75d98f9-d4x2n" Jan 14 00:04:44.676360 kubelet[3683]: I0114 00:04:44.675983 3683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ef75aa50-7d3a-4b9a-95a7-c344bbb8239a-whisker-backend-key-pair\") pod \"whisker-67f75d98f9-d4x2n\" (UID: \"ef75aa50-7d3a-4b9a-95a7-c344bbb8239a\") " pod="calico-system/whisker-67f75d98f9-d4x2n" Jan 14 00:04:44.923997 containerd[2133]: time="2026-01-14T00:04:44.923962225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67f75d98f9-d4x2n,Uid:ef75aa50-7d3a-4b9a-95a7-c344bbb8239a,Namespace:calico-system,Attempt:0,}" Jan 14 00:04:45.098264 systemd-networkd[1723]: calif37426e3ed1: Link UP Jan 14 00:04:45.099735 systemd-networkd[1723]: calif37426e3ed1: Gained carrier Jan 14 00:04:45.118505 containerd[2133]: 2026-01-14 00:04:44.956 [INFO][4839] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 
00:04:45.118505 containerd[2133]: 2026-01-14 00:04:45.002 [INFO][4839] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--n--d5ef04779b-k8s-whisker--67f75d98f9--d4x2n-eth0 whisker-67f75d98f9- calico-system ef75aa50-7d3a-4b9a-95a7-c344bbb8239a 874 0 2026-01-14 00:04:44 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:67f75d98f9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4547.0.0-n-d5ef04779b whisker-67f75d98f9-d4x2n eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif37426e3ed1 [] [] }} ContainerID="d9d8951c46eb798da85413d7216d88df9eaed091dcf495015f779ce7ec8b12be" Namespace="calico-system" Pod="whisker-67f75d98f9-d4x2n" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-whisker--67f75d98f9--d4x2n-" Jan 14 00:04:45.118505 containerd[2133]: 2026-01-14 00:04:45.002 [INFO][4839] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d9d8951c46eb798da85413d7216d88df9eaed091dcf495015f779ce7ec8b12be" Namespace="calico-system" Pod="whisker-67f75d98f9-d4x2n" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-whisker--67f75d98f9--d4x2n-eth0" Jan 14 00:04:45.118505 containerd[2133]: 2026-01-14 00:04:45.032 [INFO][4890] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d9d8951c46eb798da85413d7216d88df9eaed091dcf495015f779ce7ec8b12be" HandleID="k8s-pod-network.d9d8951c46eb798da85413d7216d88df9eaed091dcf495015f779ce7ec8b12be" Workload="ci--4547.0.0--n--d5ef04779b-k8s-whisker--67f75d98f9--d4x2n-eth0" Jan 14 00:04:45.118708 containerd[2133]: 2026-01-14 00:04:45.032 [INFO][4890] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d9d8951c46eb798da85413d7216d88df9eaed091dcf495015f779ce7ec8b12be" HandleID="k8s-pod-network.d9d8951c46eb798da85413d7216d88df9eaed091dcf495015f779ce7ec8b12be" Workload="ci--4547.0.0--n--d5ef04779b-k8s-whisker--67f75d98f9--d4x2n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3000), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-n-d5ef04779b", "pod":"whisker-67f75d98f9-d4x2n", "timestamp":"2026-01-14 00:04:45.032418184 +0000 UTC"}, Hostname:"ci-4547.0.0-n-d5ef04779b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:04:45.118708 containerd[2133]: 2026-01-14 00:04:45.032 [INFO][4890] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 00:04:45.118708 containerd[2133]: 2026-01-14 00:04:45.032 [INFO][4890] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 00:04:45.118708 containerd[2133]: 2026-01-14 00:04:45.033 [INFO][4890] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-n-d5ef04779b' Jan 14 00:04:45.118708 containerd[2133]: 2026-01-14 00:04:45.039 [INFO][4890] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d9d8951c46eb798da85413d7216d88df9eaed091dcf495015f779ce7ec8b12be" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:45.118708 containerd[2133]: 2026-01-14 00:04:45.042 [INFO][4890] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:45.118708 containerd[2133]: 2026-01-14 00:04:45.046 [INFO][4890] ipam/ipam.go 511: Trying affinity for 192.168.70.0/26 host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:45.118708 containerd[2133]: 2026-01-14 00:04:45.048 [INFO][4890] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.0/26 host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:45.118708 containerd[2133]: 2026-01-14 00:04:45.049 [INFO][4890] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.0/26 host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:45.119511 containerd[2133]: 2026-01-14 00:04:45.049 [INFO][4890] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.70.0/26 handle="k8s-pod-network.d9d8951c46eb798da85413d7216d88df9eaed091dcf495015f779ce7ec8b12be" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:45.119511 containerd[2133]: 2026-01-14 00:04:45.051 [INFO][4890] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d9d8951c46eb798da85413d7216d88df9eaed091dcf495015f779ce7ec8b12be Jan 14 00:04:45.119511 containerd[2133]: 2026-01-14 00:04:45.055 [INFO][4890] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.70.0/26 handle="k8s-pod-network.d9d8951c46eb798da85413d7216d88df9eaed091dcf495015f779ce7ec8b12be" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:45.119511 containerd[2133]: 2026-01-14 00:04:45.061 [INFO][4890] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.70.1/26] block=192.168.70.0/26 handle="k8s-pod-network.d9d8951c46eb798da85413d7216d88df9eaed091dcf495015f779ce7ec8b12be" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:45.119511 containerd[2133]: 2026-01-14 00:04:45.061 [INFO][4890] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.1/26] handle="k8s-pod-network.d9d8951c46eb798da85413d7216d88df9eaed091dcf495015f779ce7ec8b12be" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:45.119511 containerd[2133]: 2026-01-14 00:04:45.061 [INFO][4890] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 00:04:45.119511 containerd[2133]: 2026-01-14 00:04:45.061 [INFO][4890] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.70.1/26] IPv6=[] ContainerID="d9d8951c46eb798da85413d7216d88df9eaed091dcf495015f779ce7ec8b12be" HandleID="k8s-pod-network.d9d8951c46eb798da85413d7216d88df9eaed091dcf495015f779ce7ec8b12be" Workload="ci--4547.0.0--n--d5ef04779b-k8s-whisker--67f75d98f9--d4x2n-eth0" Jan 14 00:04:45.119609 containerd[2133]: 2026-01-14 00:04:45.064 [INFO][4839] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d9d8951c46eb798da85413d7216d88df9eaed091dcf495015f779ce7ec8b12be" Namespace="calico-system" Pod="whisker-67f75d98f9-d4x2n" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-whisker--67f75d98f9--d4x2n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--d5ef04779b-k8s-whisker--67f75d98f9--d4x2n-eth0", GenerateName:"whisker-67f75d98f9-", Namespace:"calico-system", SelfLink:"", UID:"ef75aa50-7d3a-4b9a-95a7-c344bbb8239a", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 4, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"67f75d98f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-d5ef04779b", ContainerID:"", Pod:"whisker-67f75d98f9-d4x2n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.70.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif37426e3ed1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:04:45.119609 containerd[2133]: 2026-01-14 00:04:45.064 [INFO][4839] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.1/32] ContainerID="d9d8951c46eb798da85413d7216d88df9eaed091dcf495015f779ce7ec8b12be" Namespace="calico-system" Pod="whisker-67f75d98f9-d4x2n" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-whisker--67f75d98f9--d4x2n-eth0" Jan 14 00:04:45.119678 containerd[2133]: 2026-01-14 00:04:45.064 [INFO][4839] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif37426e3ed1 ContainerID="d9d8951c46eb798da85413d7216d88df9eaed091dcf495015f779ce7ec8b12be" Namespace="calico-system" Pod="whisker-67f75d98f9-d4x2n" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-whisker--67f75d98f9--d4x2n-eth0" Jan 14 00:04:45.119678 containerd[2133]: 2026-01-14 00:04:45.098 [INFO][4839] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d9d8951c46eb798da85413d7216d88df9eaed091dcf495015f779ce7ec8b12be" Namespace="calico-system" Pod="whisker-67f75d98f9-d4x2n" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-whisker--67f75d98f9--d4x2n-eth0" Jan 14 00:04:45.119708 containerd[2133]: 2026-01-14 00:04:45.098 [INFO][4839] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d9d8951c46eb798da85413d7216d88df9eaed091dcf495015f779ce7ec8b12be" Namespace="calico-system"
Pod="whisker-67f75d98f9-d4x2n" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-whisker--67f75d98f9--d4x2n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--d5ef04779b-k8s-whisker--67f75d98f9--d4x2n-eth0", GenerateName:"whisker-67f75d98f9-", Namespace:"calico-system", SelfLink:"", UID:"ef75aa50-7d3a-4b9a-95a7-c344bbb8239a", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 4, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"67f75d98f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-d5ef04779b", ContainerID:"d9d8951c46eb798da85413d7216d88df9eaed091dcf495015f779ce7ec8b12be", Pod:"whisker-67f75d98f9-d4x2n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.70.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif37426e3ed1", MAC:"9e:11:61:b4:f3:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:04:45.119741 containerd[2133]: 2026-01-14 00:04:45.115 [INFO][4839] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d9d8951c46eb798da85413d7216d88df9eaed091dcf495015f779ce7ec8b12be" Namespace="calico-system" Pod="whisker-67f75d98f9-d4x2n" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-whisker--67f75d98f9--d4x2n-eth0" Jan 14 00:04:45.247879 containerd[2133]: time="2026-01-14T00:04:45.247660914Z" level=info msg="connecting to shim d9d8951c46eb798da85413d7216d88df9eaed091dcf495015f779ce7ec8b12be" address="unix:///run/containerd/s/2c6b8d008ef7d9c032472c7fe73a037fcdceee8b947de5c69bd03c0ad2bbe1a1" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:04:45.275000 audit: BPF prog-id=199 op=LOAD Jan 14 00:04:45.286440 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 14 00:04:45.286561 kernel: audit: type=1334 audit(1768349085.275:597): prog-id=199 op=LOAD Jan 14 00:04:45.278413 systemd[1]: Started cri-containerd-d9d8951c46eb798da85413d7216d88df9eaed091dcf495015f779ce7ec8b12be.scope - libcontainer container d9d8951c46eb798da85413d7216d88df9eaed091dcf495015f779ce7ec8b12be. 
Jan 14 00:04:45.275000 audit[4969]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff8ad51d8 a2=98 a3=fffff8ad51c8 items=0 ppid=4814 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.307030 kernel: audit: type=1300 audit(1768349085.275:597): arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff8ad51d8 a2=98 a3=fffff8ad51c8 items=0 ppid=4814 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.275000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 00:04:45.325564 kernel: audit: type=1327 audit(1768349085.275:597): proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 00:04:45.275000 audit: BPF prog-id=199 op=UNLOAD Jan 14 00:04:45.330486 kernel: audit: type=1334 audit(1768349085.275:598): prog-id=199 op=UNLOAD Jan 14 00:04:45.350751 kernel: audit: type=1300 audit(1768349085.275:598): arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff8ad51a8 a3=0 items=0 ppid=4814 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.275000 audit[4969]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff8ad51a8 a3=0 items=0 ppid=4814 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.378290 kernel: audit: type=1327 audit(1768349085.275:598): proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 00:04:45.275000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 00:04:45.275000 audit: BPF prog-id=200 op=LOAD Jan 14 00:04:45.383222 kernel: audit: type=1334 audit(1768349085.275:599): prog-id=200 op=LOAD Jan 14 00:04:45.275000 audit[4969]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff8ad5088 a2=74 a3=95 items=0 ppid=4814 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.400899 kernel: audit: type=1300 audit(1768349085.275:599): arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff8ad5088 a2=74 a3=95 items=0 ppid=4814 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.275000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 00:04:45.425724 kernel: audit: type=1327 audit(1768349085.275:599): proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 00:04:45.425877 kernel: audit: type=1334 audit(1768349085.276:600): prog-id=200 op=UNLOAD Jan 14 00:04:45.276000 audit: BPF prog-id=200 op=UNLOAD Jan 14 00:04:45.276000 audit[4969]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4814 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.276000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 00:04:45.276000 audit: BPF prog-id=201 op=LOAD Jan 14 00:04:45.276000 audit[4969]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff8ad50b8 a2=40 a3=fffff8ad50e8 items=0 ppid=4814 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.276000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 00:04:45.276000 audit: BPF prog-id=201 op=UNLOAD Jan 14 00:04:45.276000 audit[4969]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffff8ad50e8 items=0 ppid=4814 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.276000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 00:04:45.276000 audit: BPF prog-id=202 op=LOAD Jan 14 00:04:45.276000 audit[4970]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffcc12dc8 a2=98 a3=fffffcc12db8 items=0 ppid=4814 pid=4970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.276000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:04:45.280000 audit: BPF prog-id=202 op=UNLOAD Jan 14 00:04:45.280000 audit[4970]: SYSCALL arch=c00000b7 
syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffffcc12d98 a3=0 items=0 ppid=4814 pid=4970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.280000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:04:45.280000 audit: BPF prog-id=203 op=LOAD Jan 14 00:04:45.280000 audit[4970]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffcc12a58 a2=74 a3=95 items=0 ppid=4814 pid=4970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.280000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:04:45.285000 audit: BPF prog-id=203 op=UNLOAD Jan 14 00:04:45.285000 audit[4970]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4814 pid=4970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.285000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:04:45.285000 audit: BPF prog-id=204 op=LOAD Jan 14 00:04:45.285000 audit[4970]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffcc12ab8 a2=94 a3=2 items=0 ppid=4814 pid=4970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.285000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:04:45.285000 audit: BPF prog-id=204 op=UNLOAD Jan 14 00:04:45.285000 audit[4970]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4814 pid=4970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.285000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:04:45.393000 audit: BPF prog-id=205 op=LOAD Jan 14 00:04:45.393000 audit[4970]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffcc12a78 a2=40 a3=fffffcc12aa8 items=0 ppid=4814 pid=4970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.393000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:04:45.401000 audit: BPF prog-id=205 op=UNLOAD Jan 14 00:04:45.401000 audit[4970]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=fffffcc12aa8 items=0 ppid=4814 pid=4970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.401000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:04:45.403000 audit: BPF prog-id=206 op=LOAD Jan 14 00:04:45.403000 audit: BPF prog-id=207 op=LOAD Jan 14 00:04:45.403000 audit[4951]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=4939 pid=4951 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439643839353163343665623739386461383534313364373231366438 Jan 14 00:04:45.404000 audit: BPF prog-id=207 op=UNLOAD Jan 14 00:04:45.404000 audit[4951]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4939 pid=4951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439643839353163343665623739386461383534313364373231366438 Jan 14 00:04:45.404000 audit: BPF prog-id=208 op=LOAD Jan 14 00:04:45.404000 audit[4951]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=4939 pid=4951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439643839353163343665623739386461383534313364373231366438 Jan 14 00:04:45.404000 audit: BPF prog-id=209 op=LOAD Jan 14 00:04:45.404000 audit[4951]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=4939 pid=4951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439643839353163343665623739386461383534313364373231366438 Jan 14 00:04:45.404000 audit: BPF prog-id=209 op=UNLOAD Jan 14 00:04:45.404000 audit[4951]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4939 pid=4951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439643839353163343665623739386461383534313364373231366438 Jan 14 00:04:45.428000 audit: BPF prog-id=208 op=UNLOAD Jan 14 00:04:45.428000 audit[4951]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4939 pid=4951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
00:04:45.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439643839353163343665623739386461383534313364373231366438 Jan 14 00:04:45.428000 audit: BPF prog-id=210 op=LOAD Jan 14 00:04:45.428000 audit[4951]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=4939 pid=4951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439643839353163343665623739386461383534313364373231366438 Jan 14 00:04:45.405000 audit: BPF prog-id=211 op=LOAD Jan 14 00:04:45.405000 audit[4970]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffffcc12a88 a2=94 a3=4 items=0 ppid=4814 pid=4970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.405000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:04:45.430000 audit: BPF prog-id=211 op=UNLOAD Jan 14 00:04:45.430000 audit[4970]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4814 pid=4970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.430000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:04:45.431000 audit: BPF prog-id=212 op=LOAD Jan 14 00:04:45.431000 audit[4970]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffcc128c8 a2=94 a3=5 items=0 ppid=4814 pid=4970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.431000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:04:45.431000 audit: BPF prog-id=212 op=UNLOAD Jan 14 00:04:45.431000 audit[4970]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4814 pid=4970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.431000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:04:45.432000 audit: BPF prog-id=213 op=LOAD Jan 14 00:04:45.432000 audit[4970]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffffcc12af8 a2=94 a3=6 items=0 ppid=4814 pid=4970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.432000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:04:45.432000 audit: BPF prog-id=213 op=UNLOAD Jan 14 00:04:45.432000 audit[4970]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 
items=0 ppid=4814 pid=4970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.432000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:04:45.432000 audit: BPF prog-id=214 op=LOAD Jan 14 00:04:45.432000 audit[4970]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffffcc122c8 a2=94 a3=83 items=0 ppid=4814 pid=4970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.432000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:04:45.433000 audit: BPF prog-id=215 op=LOAD Jan 14 00:04:45.433000 audit[4970]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=fffffcc12088 a2=94 a3=2 items=0 ppid=4814 pid=4970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.433000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:04:45.433000 audit: BPF prog-id=215 op=UNLOAD Jan 14 00:04:45.433000 audit[4970]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4814 pid=4970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.433000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:04:45.433000 audit: BPF prog-id=214 op=UNLOAD Jan 14 00:04:45.433000 audit[4970]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=1038c620 a3=1037fb00 items=0 ppid=4814 pid=4970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.433000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:04:45.446000 audit: BPF prog-id=216 op=LOAD Jan 14 00:04:45.446000 audit[4982]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc915d638 a2=98 a3=ffffc915d628 items=0 ppid=4814 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.446000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 00:04:45.446000 audit: BPF prog-id=216 op=UNLOAD Jan 14 00:04:45.446000 audit[4982]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc915d608 a3=0 items=0 ppid=4814 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.446000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 00:04:45.447000 audit: BPF prog-id=217 op=LOAD Jan 14 00:04:45.447000 audit[4982]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc915d4e8 a2=74 a3=95 items=0 ppid=4814 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.447000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 00:04:45.447000 audit: BPF prog-id=217 op=UNLOAD Jan 14 00:04:45.447000 audit[4982]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4814 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.447000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 00:04:45.447000 audit: BPF prog-id=218 op=LOAD Jan 14 00:04:45.447000 audit[4982]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc915d518 a2=40 a3=ffffc915d548 items=0 ppid=4814 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.447000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 00:04:45.447000 audit: BPF prog-id=218 op=UNLOAD Jan 14 00:04:45.447000 audit[4982]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffc915d548 items=0 ppid=4814 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.447000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 00:04:45.470493 containerd[2133]: time="2026-01-14T00:04:45.470453790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67f75d98f9-d4x2n,Uid:ef75aa50-7d3a-4b9a-95a7-c344bbb8239a,Namespace:calico-system,Attempt:0,} returns sandbox id \"d9d8951c46eb798da85413d7216d88df9eaed091dcf495015f779ce7ec8b12be\"" Jan 14 00:04:45.471839 containerd[2133]: time="2026-01-14T00:04:45.471814195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 00:04:45.516605 
systemd-networkd[1723]: vxlan.calico: Link UP Jan 14 00:04:45.516616 systemd-networkd[1723]: vxlan.calico: Gained carrier Jan 14 00:04:45.530000 audit: BPF prog-id=219 op=LOAD Jan 14 00:04:45.530000 audit[5014]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd4996e18 a2=98 a3=ffffd4996e08 items=0 ppid=4814 pid=5014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.530000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:04:45.530000 audit: BPF prog-id=219 op=UNLOAD Jan 14 00:04:45.530000 audit[5014]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd4996de8 a3=0 items=0 ppid=4814 pid=5014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.530000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:04:45.530000 audit: BPF prog-id=220 op=LOAD Jan 14 00:04:45.530000 audit[5014]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd4996af8 a2=74 a3=95 items=0 ppid=4814 pid=5014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.530000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:04:45.530000 audit: BPF prog-id=220 op=UNLOAD Jan 14 00:04:45.530000 audit[5014]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4814 pid=5014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.530000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:04:45.530000 audit: BPF prog-id=221 op=LOAD Jan 14 00:04:45.530000 audit[5014]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd4996b58 a2=94 a3=2 items=0 ppid=4814 pid=5014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.530000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:04:45.530000 audit: BPF prog-id=221 op=UNLOAD Jan 14 00:04:45.530000 audit[5014]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4814 pid=5014 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.530000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:04:45.530000 audit: BPF prog-id=222 op=LOAD Jan 14 00:04:45.530000 audit[5014]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd49969d8 a2=40 a3=ffffd4996a08 items=0 ppid=4814 pid=5014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.530000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:04:45.530000 audit: BPF prog-id=222 op=UNLOAD Jan 14 00:04:45.530000 audit[5014]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffd4996a08 items=0 ppid=4814 pid=5014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.530000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:04:45.530000 audit: BPF prog-id=223 op=LOAD Jan 14 00:04:45.530000 audit[5014]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd4996b28 a2=94 a3=b7 items=0 ppid=4814 pid=5014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.530000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:04:45.530000 audit: BPF prog-id=223 op=UNLOAD Jan 14 00:04:45.530000 audit[5014]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4814 pid=5014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.530000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:04:45.531000 audit: BPF prog-id=224 op=LOAD Jan 14 00:04:45.531000 audit[5014]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd49961d8 a2=94 a3=2 items=0 ppid=4814 pid=5014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.531000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:04:45.531000 audit: BPF prog-id=224 op=UNLOAD Jan 14 00:04:45.531000 audit[5014]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4814 pid=5014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.531000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:04:45.531000 audit: BPF prog-id=225 op=LOAD Jan 14 00:04:45.531000 audit[5014]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd4996368 a2=94 a3=30 items=0 ppid=4814 pid=5014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.531000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:04:45.535000 audit: BPF prog-id=226 op=LOAD Jan 14 00:04:45.535000 audit[5017]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff39d83f8 a2=98 a3=fffff39d83e8 items=0 ppid=4814 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.535000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:04:45.535000 audit: BPF prog-id=226 op=UNLOAD Jan 14 00:04:45.535000 audit[5017]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff39d83c8 a3=0 items=0 ppid=4814 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.535000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:04:45.535000 audit: BPF prog-id=227 op=LOAD Jan 14 00:04:45.535000 audit[5017]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff39d8088 a2=74 a3=95 items=0 ppid=4814 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.535000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:04:45.535000 audit: BPF prog-id=227 op=UNLOAD Jan 14 00:04:45.535000 audit[5017]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4814 pid=5017 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.535000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:04:45.535000 audit: BPF prog-id=228 op=LOAD Jan 14 00:04:45.535000 audit[5017]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff39d80e8 a2=94 a3=2 items=0 ppid=4814 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.535000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:04:45.536000 audit: BPF prog-id=228 op=UNLOAD Jan 14 00:04:45.536000 audit[5017]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4814 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.536000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:04:45.619000 audit: BPF prog-id=229 op=LOAD Jan 14 00:04:45.619000 audit[5017]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff39d80a8 a2=40 a3=fffff39d80d8 items=0 ppid=4814 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.619000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:04:45.619000 audit: BPF prog-id=229 op=UNLOAD Jan 14 00:04:45.619000 audit[5017]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=fffff39d80d8 items=0 ppid=4814 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.619000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:04:45.626000 audit: BPF prog-id=230 op=LOAD Jan 14 00:04:45.626000 audit[5017]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff39d80b8 a2=94 a3=4 items=0 ppid=4814 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.626000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:04:45.627000 audit: BPF prog-id=230 op=UNLOAD Jan 14 
00:04:45.627000 audit[5017]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4814 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.627000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:04:45.627000 audit: BPF prog-id=231 op=LOAD Jan 14 00:04:45.627000 audit[5017]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff39d7ef8 a2=94 a3=5 items=0 ppid=4814 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.627000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:04:45.627000 audit: BPF prog-id=231 op=UNLOAD Jan 14 00:04:45.627000 audit[5017]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4814 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.627000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:04:45.627000 audit: BPF prog-id=232 op=LOAD Jan 14 00:04:45.627000 audit[5017]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff39d8128 a2=94 a3=6 items=0 ppid=4814 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.627000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:04:45.628000 audit: BPF prog-id=232 op=UNLOAD Jan 14 00:04:45.628000 audit[5017]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4814 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.628000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:04:45.628000 audit: BPF prog-id=233 op=LOAD Jan 14 00:04:45.628000 audit[5017]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff39d78f8 a2=94 a3=83 items=0 ppid=4814 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.628000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:04:45.628000 audit: BPF prog-id=234 op=LOAD Jan 14 00:04:45.628000 audit[5017]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=fffff39d76b8 a2=94 a3=2 items=0 ppid=4814 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.628000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:04:45.628000 audit: BPF prog-id=234 op=UNLOAD Jan 14 00:04:45.628000 audit[5017]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4814 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.628000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:04:45.629000 audit: BPF prog-id=233 op=UNLOAD Jan 14 00:04:45.629000 audit[5017]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=3dbd9620 a3=3dbccb00 items=0 ppid=4814 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.629000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:04:45.634000 audit: BPF prog-id=225 op=UNLOAD Jan 14 00:04:45.634000 audit[4814]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4000e26240 a2=0 a3=0 items=0 ppid=4797 pid=4814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.634000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 14 00:04:45.731000 audit[5042]: NETFILTER_CFG table=nat:124 family=2 entries=15 op=nft_register_chain pid=5042 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:04:45.731000 audit[5042]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffe3d68810 a2=0 a3=ffffa35b9fa8 items=0 ppid=4814 pid=5042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.731000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:04:45.735000 audit[5043]: NETFILTER_CFG table=mangle:125 family=2 entries=16 op=nft_register_chain pid=5043 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:04:45.735000 audit[5043]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffe54b81d0 a2=0 
a3=ffff87729fa8 items=0 ppid=4814 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.735000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:04:45.762762 containerd[2133]: time="2026-01-14T00:04:45.762711482Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:04:45.766084 containerd[2133]: time="2026-01-14T00:04:45.766046593Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 00:04:45.766184 containerd[2133]: time="2026-01-14T00:04:45.766136675Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 00:04:45.766405 kubelet[3683]: E0114 00:04:45.766333 3683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 00:04:45.766903 kubelet[3683]: E0114 00:04:45.766789 3683 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 00:04:45.739000 audit[5044]: NETFILTER_CFG table=filter:126 family=2 entries=94 op=nft_register_chain pid=5044 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:04:45.739000 audit[5044]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffecd84eb0 a2=0 a3=ffff82afffa8 items=0 ppid=4814 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.739000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:04:45.777622 kubelet[3683]: E0114 00:04:45.777574 3683 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2c5ec5d7d72d44948e0cbef03a4dbb30,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wdl2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67f75d98f9-d4x2n_calico-system(ef75aa50-7d3a-4b9a-95a7-c344bbb8239a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 00:04:45.780147 containerd[2133]: time="2026-01-14T00:04:45.780104460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 00:04:45.833000 audit[5041]: NETFILTER_CFG table=raw:127 family=2 entries=21 op=nft_register_chain pid=5041 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:04:45.833000 audit[5041]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffddd41320 a2=0 a3=ffff819d3fa8 items=0 ppid=4814 pid=5041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:45.833000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:04:46.030005 containerd[2133]: time="2026-01-14T00:04:46.029806221Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:04:46.033172 containerd[2133]: time="2026-01-14T00:04:46.033114564Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 00:04:46.033247 containerd[2133]: time="2026-01-14T00:04:46.033172773Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 00:04:46.033755 kubelet[3683]: E0114 00:04:46.033531 3683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 00:04:46.033755 kubelet[3683]: E0114 00:04:46.033593 3683 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 00:04:46.033864 kubelet[3683]: E0114 00:04:46.033703 3683 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdl2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67f75d98f9-d4x2n_calico-system(ef75aa50-7d3a-4b9a-95a7-c344bbb8239a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 00:04:46.035264 kubelet[3683]: E0114 00:04:46.035224 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67f75d98f9-d4x2n" 
podUID="ef75aa50-7d3a-4b9a-95a7-c344bbb8239a" Jan 14 00:04:46.215341 systemd-networkd[1723]: calif37426e3ed1: Gained IPv6LL Jan 14 00:04:46.382916 kubelet[3683]: I0114 00:04:46.382800 3683 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="102726bc-c7ae-4ed2-8671-64d4ae366366" path="/var/lib/kubelet/pods/102726bc-c7ae-4ed2-8671-64d4ae366366/volumes" Jan 14 00:04:46.553074 kubelet[3683]: E0114 00:04:46.552995 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67f75d98f9-d4x2n" podUID="ef75aa50-7d3a-4b9a-95a7-c344bbb8239a" Jan 14 00:04:46.576000 audit[5057]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=5057 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:46.576000 audit[5057]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff6c9af10 a2=0 a3=1 items=0 ppid=3789 pid=5057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:46.576000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:46.580000 audit[5057]: NETFILTER_CFG table=nat:129 family=2 entries=14 op=nft_register_rule pid=5057 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:46.580000 audit[5057]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=fffff6c9af10 a2=0 a3=1 items=0 ppid=3789 pid=5057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:46.580000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:46.727344 systemd-networkd[1723]: vxlan.calico: Gained IPv6LL Jan 14 00:04:47.381437 containerd[2133]: time="2026-01-14T00:04:47.381393993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rp5x8,Uid:a414c0bf-06ad-47c5-a4fa-4356f045a557,Namespace:kube-system,Attempt:0,}" Jan 14 00:04:47.382139 containerd[2133]: time="2026-01-14T00:04:47.381921596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79cf4bbcf4-jhdlg,Uid:52895973-a9d8-41ff-890b-151c819ea908,Namespace:calico-system,Attempt:0,}" Jan 14 00:04:47.540522 systemd-networkd[1723]: calie88ef3d6a1a: Link UP Jan 14 00:04:47.541177 systemd-networkd[1723]: calie88ef3d6a1a: Gained carrier Jan 14 00:04:47.555790 containerd[2133]: 2026-01-14 00:04:47.443 [INFO][5060] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {ci--4547.0.0--n--d5ef04779b-k8s-calico--kube--controllers--79cf4bbcf4--jhdlg-eth0 calico-kube-controllers-79cf4bbcf4- calico-system 52895973-a9d8-41ff-890b-151c819ea908 793 0 2026-01-14 00:04:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:79cf4bbcf4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4547.0.0-n-d5ef04779b calico-kube-controllers-79cf4bbcf4-jhdlg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie88ef3d6a1a [] [] }} ContainerID="312313fd79ac2a536e607cfe62a3a363684f523c6e7f0634a890a02a938de260" Namespace="calico-system" Pod="calico-kube-controllers-79cf4bbcf4-jhdlg" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-calico--kube--controllers--79cf4bbcf4--jhdlg-" Jan 14 00:04:47.555790 containerd[2133]: 2026-01-14 00:04:47.443 [INFO][5060] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="312313fd79ac2a536e607cfe62a3a363684f523c6e7f0634a890a02a938de260" Namespace="calico-system" Pod="calico-kube-controllers-79cf4bbcf4-jhdlg" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-calico--kube--controllers--79cf4bbcf4--jhdlg-eth0" Jan 14 00:04:47.555790 containerd[2133]: 2026-01-14 00:04:47.489 [INFO][5081] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="312313fd79ac2a536e607cfe62a3a363684f523c6e7f0634a890a02a938de260" HandleID="k8s-pod-network.312313fd79ac2a536e607cfe62a3a363684f523c6e7f0634a890a02a938de260" Workload="ci--4547.0.0--n--d5ef04779b-k8s-calico--kube--controllers--79cf4bbcf4--jhdlg-eth0" Jan 14 00:04:47.556372 containerd[2133]: 2026-01-14 00:04:47.490 [INFO][5081] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="312313fd79ac2a536e607cfe62a3a363684f523c6e7f0634a890a02a938de260" HandleID="k8s-pod-network.312313fd79ac2a536e607cfe62a3a363684f523c6e7f0634a890a02a938de260" Workload="ci--4547.0.0--n--d5ef04779b-k8s-calico--kube--controllers--79cf4bbcf4--jhdlg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-n-d5ef04779b", "pod":"calico-kube-controllers-79cf4bbcf4-jhdlg", "timestamp":"2026-01-14 00:04:47.489409938 +0000 UTC"}, Hostname:"ci-4547.0.0-n-d5ef04779b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:04:47.556372 containerd[2133]: 2026-01-14 00:04:47.491 [INFO][5081] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 00:04:47.556372 containerd[2133]: 2026-01-14 00:04:47.491 [INFO][5081] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 00:04:47.556372 containerd[2133]: 2026-01-14 00:04:47.491 [INFO][5081] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-n-d5ef04779b' Jan 14 00:04:47.556372 containerd[2133]: 2026-01-14 00:04:47.497 [INFO][5081] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.312313fd79ac2a536e607cfe62a3a363684f523c6e7f0634a890a02a938de260" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:47.556372 containerd[2133]: 2026-01-14 00:04:47.503 [INFO][5081] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:47.556372 containerd[2133]: 2026-01-14 00:04:47.511 [INFO][5081] ipam/ipam.go 511: Trying affinity for 192.168.70.0/26 host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:47.556372 containerd[2133]: 2026-01-14 00:04:47.514 [INFO][5081] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.0/26 host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:47.556372 containerd[2133]: 2026-01-14 00:04:47.517 [INFO][5081] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.0/26 host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:47.557494 containerd[2133]: 2026-01-14 00:04:47.517 [INFO][5081] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.70.0/26 handle="k8s-pod-network.312313fd79ac2a536e607cfe62a3a363684f523c6e7f0634a890a02a938de260" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:47.557494 containerd[2133]: 2026-01-14 00:04:47.518 [INFO][5081] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.312313fd79ac2a536e607cfe62a3a363684f523c6e7f0634a890a02a938de260 Jan 14 00:04:47.557494 containerd[2133]: 2026-01-14 00:04:47.525 [INFO][5081] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.70.0/26 handle="k8s-pod-network.312313fd79ac2a536e607cfe62a3a363684f523c6e7f0634a890a02a938de260" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:47.557494 containerd[2133]: 2026-01-14 00:04:47.530 [INFO][5081] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.70.2/26] block=192.168.70.0/26 handle="k8s-pod-network.312313fd79ac2a536e607cfe62a3a363684f523c6e7f0634a890a02a938de260" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:47.557494 containerd[2133]: 2026-01-14 00:04:47.530 [INFO][5081] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.2/26] handle="k8s-pod-network.312313fd79ac2a536e607cfe62a3a363684f523c6e7f0634a890a02a938de260" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:47.557494 containerd[2133]: 2026-01-14 00:04:47.530 [INFO][5081] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 00:04:47.557494 containerd[2133]: 2026-01-14 00:04:47.531 [INFO][5081] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.70.2/26] IPv6=[] ContainerID="312313fd79ac2a536e607cfe62a3a363684f523c6e7f0634a890a02a938de260" HandleID="k8s-pod-network.312313fd79ac2a536e607cfe62a3a363684f523c6e7f0634a890a02a938de260" Workload="ci--4547.0.0--n--d5ef04779b-k8s-calico--kube--controllers--79cf4bbcf4--jhdlg-eth0" Jan 14 00:04:47.557626 containerd[2133]: 2026-01-14 00:04:47.535 [INFO][5060] cni-plugin/k8s.go 418: Populated endpoint ContainerID="312313fd79ac2a536e607cfe62a3a363684f523c6e7f0634a890a02a938de260" Namespace="calico-system" Pod="calico-kube-controllers-79cf4bbcf4-jhdlg" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-calico--kube--controllers--79cf4bbcf4--jhdlg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--d5ef04779b-k8s-calico--kube--controllers--79cf4bbcf4--jhdlg-eth0", GenerateName:"calico-kube-controllers-79cf4bbcf4-", Namespace:"calico-system", SelfLink:"", UID:"52895973-a9d8-41ff-890b-151c819ea908", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 4, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79cf4bbcf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-d5ef04779b", ContainerID:"", Pod:"calico-kube-controllers-79cf4bbcf4-jhdlg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.70.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie88ef3d6a1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:04:47.557670 containerd[2133]: 2026-01-14 00:04:47.535 [INFO][5060] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.2/32] ContainerID="312313fd79ac2a536e607cfe62a3a363684f523c6e7f0634a890a02a938de260" Namespace="calico-system" Pod="calico-kube-controllers-79cf4bbcf4-jhdlg" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-calico--kube--controllers--79cf4bbcf4--jhdlg-eth0" Jan 14 00:04:47.557670 containerd[2133]: 2026-01-14 00:04:47.535 [INFO][5060] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie88ef3d6a1a ContainerID="312313fd79ac2a536e607cfe62a3a363684f523c6e7f0634a890a02a938de260" Namespace="calico-system" Pod="calico-kube-controllers-79cf4bbcf4-jhdlg" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-calico--kube--controllers--79cf4bbcf4--jhdlg-eth0" Jan 14 00:04:47.557670 containerd[2133]: 2026-01-14 00:04:47.541 [INFO][5060] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="312313fd79ac2a536e607cfe62a3a363684f523c6e7f0634a890a02a938de260" Namespace="calico-system" Pod="calico-kube-controllers-79cf4bbcf4-jhdlg" 
WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-calico--kube--controllers--79cf4bbcf4--jhdlg-eth0" Jan 14 00:04:47.557717 containerd[2133]: 2026-01-14 00:04:47.542 [INFO][5060] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="312313fd79ac2a536e607cfe62a3a363684f523c6e7f0634a890a02a938de260" Namespace="calico-system" Pod="calico-kube-controllers-79cf4bbcf4-jhdlg" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-calico--kube--controllers--79cf4bbcf4--jhdlg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--d5ef04779b-k8s-calico--kube--controllers--79cf4bbcf4--jhdlg-eth0", GenerateName:"calico-kube-controllers-79cf4bbcf4-", Namespace:"calico-system", SelfLink:"", UID:"52895973-a9d8-41ff-890b-151c819ea908", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 4, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79cf4bbcf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-d5ef04779b", ContainerID:"312313fd79ac2a536e607cfe62a3a363684f523c6e7f0634a890a02a938de260", Pod:"calico-kube-controllers-79cf4bbcf4-jhdlg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.70.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie88ef3d6a1a", MAC:"9e:6b:0d:4e:ae:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:04:47.557751 containerd[2133]: 2026-01-14 00:04:47.552 [INFO][5060] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="312313fd79ac2a536e607cfe62a3a363684f523c6e7f0634a890a02a938de260" Namespace="calico-system" Pod="calico-kube-controllers-79cf4bbcf4-jhdlg" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-calico--kube--controllers--79cf4bbcf4--jhdlg-eth0" Jan 14 00:04:47.567000 audit[5106]: NETFILTER_CFG table=filter:130 family=2 entries=36 op=nft_register_chain pid=5106 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:04:47.567000 audit[5106]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19576 a0=3 a1=ffffe31ec110 a2=0 a3=ffff8c0fbfa8 items=0 ppid=4814 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:47.567000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:04:47.632597 systemd-networkd[1723]: calia3258b42d1d: Link UP Jan 14 00:04:47.634671 systemd-networkd[1723]: calia3258b42d1d: Gained carrier Jan 14 00:04:47.656363 containerd[2133]: 2026-01-14 00:04:47.483 [INFO][5064] cni-plugin/plugin.go 340: Calico CNI found existing 
endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--n--d5ef04779b-k8s-coredns--668d6bf9bc--rp5x8-eth0 coredns-668d6bf9bc- kube-system a414c0bf-06ad-47c5-a4fa-4356f045a557 799 0 2026-01-14 00:04:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547.0.0-n-d5ef04779b coredns-668d6bf9bc-rp5x8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia3258b42d1d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bfc1c1b547c0bc9fd1c77f00805ccf851a705eebc2c5d2746dfcad6e2ec702f3" Namespace="kube-system" Pod="coredns-668d6bf9bc-rp5x8" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-coredns--668d6bf9bc--rp5x8-" Jan 14 00:04:47.656363 containerd[2133]: 2026-01-14 00:04:47.484 [INFO][5064] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bfc1c1b547c0bc9fd1c77f00805ccf851a705eebc2c5d2746dfcad6e2ec702f3" Namespace="kube-system" Pod="coredns-668d6bf9bc-rp5x8" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-coredns--668d6bf9bc--rp5x8-eth0" Jan 14 00:04:47.656363 containerd[2133]: 2026-01-14 00:04:47.517 [INFO][5092] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bfc1c1b547c0bc9fd1c77f00805ccf851a705eebc2c5d2746dfcad6e2ec702f3" HandleID="k8s-pod-network.bfc1c1b547c0bc9fd1c77f00805ccf851a705eebc2c5d2746dfcad6e2ec702f3" Workload="ci--4547.0.0--n--d5ef04779b-k8s-coredns--668d6bf9bc--rp5x8-eth0" Jan 14 00:04:47.656875 containerd[2133]: 2026-01-14 00:04:47.518 [INFO][5092] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bfc1c1b547c0bc9fd1c77f00805ccf851a705eebc2c5d2746dfcad6e2ec702f3" HandleID="k8s-pod-network.bfc1c1b547c0bc9fd1c77f00805ccf851a705eebc2c5d2746dfcad6e2ec702f3" Workload="ci--4547.0.0--n--d5ef04779b-k8s-coredns--668d6bf9bc--rp5x8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b6c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547.0.0-n-d5ef04779b", "pod":"coredns-668d6bf9bc-rp5x8", "timestamp":"2026-01-14 00:04:47.517768467 +0000 UTC"}, Hostname:"ci-4547.0.0-n-d5ef04779b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:04:47.656875 containerd[2133]: 2026-01-14 00:04:47.518 [INFO][5092] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 00:04:47.656875 containerd[2133]: 2026-01-14 00:04:47.530 [INFO][5092] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 00:04:47.656875 containerd[2133]: 2026-01-14 00:04:47.531 [INFO][5092] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-n-d5ef04779b' Jan 14 00:04:47.656875 containerd[2133]: 2026-01-14 00:04:47.598 [INFO][5092] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bfc1c1b547c0bc9fd1c77f00805ccf851a705eebc2c5d2746dfcad6e2ec702f3" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:47.656875 containerd[2133]: 2026-01-14 00:04:47.604 [INFO][5092] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:47.656875 containerd[2133]: 2026-01-14 00:04:47.608 [INFO][5092] ipam/ipam.go 511: Trying affinity for 192.168.70.0/26 host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:47.656875 containerd[2133]: 2026-01-14 00:04:47.610 [INFO][5092] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.0/26 host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:47.656875 containerd[2133]: 2026-01-14 00:04:47.612 [INFO][5092] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.0/26 host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:47.657311 containerd[2133]: 2026-01-14 00:04:47.612 [INFO][5092] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.70.0/26 handle="k8s-pod-network.bfc1c1b547c0bc9fd1c77f00805ccf851a705eebc2c5d2746dfcad6e2ec702f3" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:47.657311 containerd[2133]: 2026-01-14 00:04:47.614 [INFO][5092] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bfc1c1b547c0bc9fd1c77f00805ccf851a705eebc2c5d2746dfcad6e2ec702f3 Jan 14 00:04:47.657311 containerd[2133]: 2026-01-14 00:04:47.618 [INFO][5092] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.70.0/26 handle="k8s-pod-network.bfc1c1b547c0bc9fd1c77f00805ccf851a705eebc2c5d2746dfcad6e2ec702f3" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:47.657311 containerd[2133]: 2026-01-14 00:04:47.626 [INFO][5092] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.70.3/26] block=192.168.70.0/26 handle="k8s-pod-network.bfc1c1b547c0bc9fd1c77f00805ccf851a705eebc2c5d2746dfcad6e2ec702f3" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:47.657311 containerd[2133]: 2026-01-14 00:04:47.626 [INFO][5092] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.3/26] handle="k8s-pod-network.bfc1c1b547c0bc9fd1c77f00805ccf851a705eebc2c5d2746dfcad6e2ec702f3" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:47.657311 containerd[2133]: 2026-01-14 00:04:47.626 [INFO][5092] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 00:04:47.657311 containerd[2133]: 2026-01-14 00:04:47.626 [INFO][5092] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.70.3/26] IPv6=[] ContainerID="bfc1c1b547c0bc9fd1c77f00805ccf851a705eebc2c5d2746dfcad6e2ec702f3" HandleID="k8s-pod-network.bfc1c1b547c0bc9fd1c77f00805ccf851a705eebc2c5d2746dfcad6e2ec702f3" Workload="ci--4547.0.0--n--d5ef04779b-k8s-coredns--668d6bf9bc--rp5x8-eth0" Jan 14 00:04:47.657430 containerd[2133]: 2026-01-14 00:04:47.628 [INFO][5064] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bfc1c1b547c0bc9fd1c77f00805ccf851a705eebc2c5d2746dfcad6e2ec702f3" Namespace="kube-system" Pod="coredns-668d6bf9bc-rp5x8" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-coredns--668d6bf9bc--rp5x8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--d5ef04779b-k8s-coredns--668d6bf9bc--rp5x8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a414c0bf-06ad-47c5-a4fa-4356f045a557", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 4, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-d5ef04779b", ContainerID:"", Pod:"coredns-668d6bf9bc-rp5x8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3258b42d1d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:04:47.657430 containerd[2133]: 2026-01-14 00:04:47.629 [INFO][5064] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.3/32] ContainerID="bfc1c1b547c0bc9fd1c77f00805ccf851a705eebc2c5d2746dfcad6e2ec702f3" Namespace="kube-system" Pod="coredns-668d6bf9bc-rp5x8" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-coredns--668d6bf9bc--rp5x8-eth0" Jan 14 00:04:47.657430 containerd[2133]: 2026-01-14 00:04:47.629 [INFO][5064] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3258b42d1d ContainerID="bfc1c1b547c0bc9fd1c77f00805ccf851a705eebc2c5d2746dfcad6e2ec702f3" Namespace="kube-system" Pod="coredns-668d6bf9bc-rp5x8" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-coredns--668d6bf9bc--rp5x8-eth0" Jan 14 00:04:47.657430 containerd[2133]: 2026-01-14 00:04:47.635 [INFO][5064] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bfc1c1b547c0bc9fd1c77f00805ccf851a705eebc2c5d2746dfcad6e2ec702f3" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-rp5x8" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-coredns--668d6bf9bc--rp5x8-eth0" Jan 14 00:04:47.657430 containerd[2133]: 2026-01-14 00:04:47.635 [INFO][5064] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bfc1c1b547c0bc9fd1c77f00805ccf851a705eebc2c5d2746dfcad6e2ec702f3" Namespace="kube-system" Pod="coredns-668d6bf9bc-rp5x8" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-coredns--668d6bf9bc--rp5x8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--d5ef04779b-k8s-coredns--668d6bf9bc--rp5x8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a414c0bf-06ad-47c5-a4fa-4356f045a557", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 4, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-d5ef04779b", ContainerID:"bfc1c1b547c0bc9fd1c77f00805ccf851a705eebc2c5d2746dfcad6e2ec702f3", Pod:"coredns-668d6bf9bc-rp5x8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3258b42d1d", MAC:"8e:33:53:40:c7:fd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:04:47.657430 containerd[2133]: 2026-01-14 00:04:47.646 [INFO][5064] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bfc1c1b547c0bc9fd1c77f00805ccf851a705eebc2c5d2746dfcad6e2ec702f3" Namespace="kube-system" Pod="coredns-668d6bf9bc-rp5x8" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-coredns--668d6bf9bc--rp5x8-eth0" Jan 14 00:04:47.663000 audit[5115]: NETFILTER_CFG table=filter:131 family=2 entries=46 op=nft_register_chain pid=5115 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:04:47.663000 audit[5115]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23740 a0=3 a1=fffffbad1000 a2=0 a3=ffffa2bc9fa8 items=0 ppid=4814 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:47.663000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:04:48.227651 containerd[2133]: time="2026-01-14T00:04:48.227565230Z" level=info 
msg="connecting to shim 312313fd79ac2a536e607cfe62a3a363684f523c6e7f0634a890a02a938de260" address="unix:///run/containerd/s/53e85b4d90c85b09b69ecd42e720aa5467c0ea2650832f3e8e89088cfac9417e" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:04:48.229018 containerd[2133]: time="2026-01-14T00:04:48.228980948Z" level=info msg="connecting to shim bfc1c1b547c0bc9fd1c77f00805ccf851a705eebc2c5d2746dfcad6e2ec702f3" address="unix:///run/containerd/s/38e10ab7713c5cf12b9203ec9d685d23556cac8475703fe943148510c8ccf19a" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:04:48.253450 systemd[1]: Started cri-containerd-312313fd79ac2a536e607cfe62a3a363684f523c6e7f0634a890a02a938de260.scope - libcontainer container 312313fd79ac2a536e607cfe62a3a363684f523c6e7f0634a890a02a938de260. Jan 14 00:04:48.262429 systemd[1]: Started cri-containerd-bfc1c1b547c0bc9fd1c77f00805ccf851a705eebc2c5d2746dfcad6e2ec702f3.scope - libcontainer container bfc1c1b547c0bc9fd1c77f00805ccf851a705eebc2c5d2746dfcad6e2ec702f3. Jan 14 00:04:48.268000 audit: BPF prog-id=235 op=LOAD Jan 14 00:04:48.268000 audit: BPF prog-id=236 op=LOAD Jan 14 00:04:48.268000 audit[5155]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5130 pid=5155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331323331336664373961633261353336653630376366653632613361 Jan 14 00:04:48.268000 audit: BPF prog-id=236 op=UNLOAD Jan 14 00:04:48.268000 audit[5155]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5130 pid=5155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331323331336664373961633261353336653630376366653632613361 Jan 14 00:04:48.268000 audit: BPF prog-id=237 op=LOAD Jan 14 00:04:48.268000 audit[5155]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5130 pid=5155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331323331336664373961633261353336653630376366653632613361 Jan 14 00:04:48.268000 audit: BPF prog-id=238 op=LOAD Jan 14 00:04:48.268000 audit[5155]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5130 pid=5155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.268000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331323331336664373961633261353336653630376366653632613361 Jan 14 00:04:48.268000 audit: BPF prog-id=238 op=UNLOAD Jan 14 00:04:48.268000 audit[5155]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5130 pid=5155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331323331336664373961633261353336653630376366653632613361 Jan 14 00:04:48.268000 audit: BPF prog-id=237 op=UNLOAD Jan 14 00:04:48.268000 audit[5155]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5130 pid=5155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331323331336664373961633261353336653630376366653632613361 Jan 14 00:04:48.269000 audit: BPF prog-id=239 op=LOAD Jan 14 00:04:48.269000 audit[5155]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5130 pid=5155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.269000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331323331336664373961633261353336653630376366653632613361 Jan 14 00:04:48.274000 audit: BPF prog-id=240 op=LOAD Jan 14 00:04:48.274000 audit: BPF prog-id=241 op=LOAD Jan 14 00:04:48.274000 audit[5165]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000186180 a2=98 a3=0 items=0 ppid=5134 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.274000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266633163316235343763306263396664316337376630303830356363 Jan 14 00:04:48.276000 audit: BPF prog-id=241 op=UNLOAD Jan 14 00:04:48.276000 audit[5165]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5134 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.276000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266633163316235343763306263396664316337376630303830356363 Jan 14 00:04:48.276000 audit: BPF prog-id=242 op=LOAD Jan 14 00:04:48.276000 audit[5165]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=5134 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266633163316235343763306263396664316337376630303830356363 Jan 14 00:04:48.276000 audit: BPF prog-id=243 op=LOAD Jan 14 00:04:48.276000 audit[5165]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000186168 a2=98 a3=0 items=0 ppid=5134 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266633163316235343763306263396664316337376630303830356363 Jan 14 00:04:48.276000 audit: BPF prog-id=243 op=UNLOAD Jan 14 00:04:48.276000 audit[5165]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5134 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266633163316235343763306263396664316337376630303830356363 Jan 14 00:04:48.277000 audit: BPF prog-id=242 op=UNLOAD Jan 14 00:04:48.277000 audit[5165]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5134 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.277000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266633163316235343763306263396664316337376630303830356363 Jan 14 00:04:48.277000 audit: BPF prog-id=244 op=LOAD Jan 14 00:04:48.277000 audit[5165]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000186648 a2=98 a3=0 items=0 ppid=5134 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.277000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266633163316235343763306263396664316337376630303830356363 Jan 14 00:04:48.306753 containerd[2133]: time="2026-01-14T00:04:48.306633593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79cf4bbcf4-jhdlg,Uid:52895973-a9d8-41ff-890b-151c819ea908,Namespace:calico-system,Attempt:0,} returns sandbox id \"312313fd79ac2a536e607cfe62a3a363684f523c6e7f0634a890a02a938de260\"" Jan 14 00:04:48.309869 containerd[2133]: time="2026-01-14T00:04:48.309835149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 00:04:48.316717 containerd[2133]: time="2026-01-14T00:04:48.316628373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rp5x8,Uid:a414c0bf-06ad-47c5-a4fa-4356f045a557,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfc1c1b547c0bc9fd1c77f00805ccf851a705eebc2c5d2746dfcad6e2ec702f3\"" Jan 14 00:04:48.320673 containerd[2133]: time="2026-01-14T00:04:48.320636554Z" level=info msg="CreateContainer within sandbox \"bfc1c1b547c0bc9fd1c77f00805ccf851a705eebc2c5d2746dfcad6e2ec702f3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 00:04:48.344338 containerd[2133]: time="2026-01-14T00:04:48.344289231Z" level=info msg="Container e0366bd59aba7df9c981a5a7608ebd9b436cc4317b2c359507192a9393a23d56: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:04:48.357131 containerd[2133]: time="2026-01-14T00:04:48.357062494Z" level=info msg="CreateContainer within sandbox \"bfc1c1b547c0bc9fd1c77f00805ccf851a705eebc2c5d2746dfcad6e2ec702f3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e0366bd59aba7df9c981a5a7608ebd9b436cc4317b2c359507192a9393a23d56\"" Jan 14 00:04:48.358050 containerd[2133]: time="2026-01-14T00:04:48.358016506Z" level=info msg="StartContainer for \"e0366bd59aba7df9c981a5a7608ebd9b436cc4317b2c359507192a9393a23d56\"" Jan 14 00:04:48.359582 containerd[2133]: time="2026-01-14T00:04:48.359550755Z" level=info msg="connecting to shim e0366bd59aba7df9c981a5a7608ebd9b436cc4317b2c359507192a9393a23d56" address="unix:///run/containerd/s/38e10ab7713c5cf12b9203ec9d685d23556cac8475703fe943148510c8ccf19a" protocol=ttrpc version=3 Jan 14 00:04:48.379373 systemd[1]: Started cri-containerd-e0366bd59aba7df9c981a5a7608ebd9b436cc4317b2c359507192a9393a23d56.scope - libcontainer container e0366bd59aba7df9c981a5a7608ebd9b436cc4317b2c359507192a9393a23d56. 
Jan 14 00:04:48.381729 containerd[2133]: time="2026-01-14T00:04:48.381679384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hgz55,Uid:f0601279-098f-420b-84a8-b4028d2c0ea2,Namespace:calico-system,Attempt:0,}" Jan 14 00:04:48.393000 audit: BPF prog-id=245 op=LOAD Jan 14 00:04:48.394000 audit: BPF prog-id=246 op=LOAD Jan 14 00:04:48.394000 audit[5207]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5134 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530333636626435396162613764663963393831613561373630386562 Jan 14 00:04:48.396000 audit: BPF prog-id=246 op=UNLOAD Jan 14 00:04:48.396000 audit[5207]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5134 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.396000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530333636626435396162613764663963393831613561373630386562 Jan 14 00:04:48.396000 audit: BPF prog-id=247 op=LOAD Jan 14 00:04:48.396000 audit[5207]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5134 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.396000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530333636626435396162613764663963393831613561373630386562 Jan 14 00:04:48.396000 audit: BPF prog-id=248 op=LOAD Jan 14 00:04:48.396000 audit[5207]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5134 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.396000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530333636626435396162613764663963393831613561373630386562 Jan 14 00:04:48.396000 audit: BPF prog-id=248 op=UNLOAD Jan 14 00:04:48.396000 audit[5207]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5134 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.396000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530333636626435396162613764663963393831613561373630386562 Jan 14 00:04:48.396000 audit: BPF prog-id=247 op=UNLOAD Jan 14 00:04:48.396000 audit[5207]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5134 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.396000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530333636626435396162613764663963393831613561373630386562 Jan 14 00:04:48.396000 audit: BPF prog-id=249 op=LOAD Jan 14 00:04:48.396000 audit[5207]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5134 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.396000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530333636626435396162613764663963393831613561373630386562 Jan 14 00:04:48.427600 containerd[2133]: time="2026-01-14T00:04:48.427465450Z" level=info msg="StartContainer for \"e0366bd59aba7df9c981a5a7608ebd9b436cc4317b2c359507192a9393a23d56\" returns successfully" Jan 14 00:04:48.516414 systemd-networkd[1723]: cali871f4a95d4a: Link UP Jan 14 00:04:48.517618 systemd-networkd[1723]: cali871f4a95d4a: Gained carrier Jan 14 00:04:48.537981 containerd[2133]: 2026-01-14 00:04:48.459 [INFO][5237] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--n--d5ef04779b-k8s-csi--node--driver--hgz55-eth0 csi-node-driver- calico-system f0601279-098f-420b-84a8-b4028d2c0ea2 688 0 2026-01-14 00:04:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4547.0.0-n-d5ef04779b csi-node-driver-hgz55 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali871f4a95d4a [] [] }} ContainerID="1a3cc7880c6d92e4acf3c63c1cf21e8cd2c461022d7681c0243e0759ef5dc82b" Namespace="calico-system" Pod="csi-node-driver-hgz55" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-csi--node--driver--hgz55-" Jan 14 00:04:48.537981 containerd[2133]: 2026-01-14 00:04:48.459 [INFO][5237] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a3cc7880c6d92e4acf3c63c1cf21e8cd2c461022d7681c0243e0759ef5dc82b" Namespace="calico-system" Pod="csi-node-driver-hgz55" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-csi--node--driver--hgz55-eth0" Jan 14 00:04:48.537981 containerd[2133]: 2026-01-14 00:04:48.480 [INFO][5252] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="1a3cc7880c6d92e4acf3c63c1cf21e8cd2c461022d7681c0243e0759ef5dc82b" HandleID="k8s-pod-network.1a3cc7880c6d92e4acf3c63c1cf21e8cd2c461022d7681c0243e0759ef5dc82b" Workload="ci--4547.0.0--n--d5ef04779b-k8s-csi--node--driver--hgz55-eth0" Jan 14 00:04:48.537981 containerd[2133]: 2026-01-14 00:04:48.480 [INFO][5252] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1a3cc7880c6d92e4acf3c63c1cf21e8cd2c461022d7681c0243e0759ef5dc82b" HandleID="k8s-pod-network.1a3cc7880c6d92e4acf3c63c1cf21e8cd2c461022d7681c0243e0759ef5dc82b" Workload="ci--4547.0.0--n--d5ef04779b-k8s-csi--node--driver--hgz55-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab3a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-n-d5ef04779b", "pod":"csi-node-driver-hgz55", "timestamp":"2026-01-14 00:04:48.480553303 +0000 UTC"}, Hostname:"ci-4547.0.0-n-d5ef04779b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:04:48.537981 containerd[2133]: 2026-01-14 00:04:48.480 [INFO][5252] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 00:04:48.537981 containerd[2133]: 2026-01-14 00:04:48.480 [INFO][5252] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 00:04:48.537981 containerd[2133]: 2026-01-14 00:04:48.480 [INFO][5252] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-n-d5ef04779b' Jan 14 00:04:48.537981 containerd[2133]: 2026-01-14 00:04:48.486 [INFO][5252] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1a3cc7880c6d92e4acf3c63c1cf21e8cd2c461022d7681c0243e0759ef5dc82b" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:48.537981 containerd[2133]: 2026-01-14 00:04:48.490 [INFO][5252] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:48.537981 containerd[2133]: 2026-01-14 00:04:48.493 [INFO][5252] ipam/ipam.go 511: Trying affinity for 192.168.70.0/26 host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:48.537981 containerd[2133]: 2026-01-14 00:04:48.495 [INFO][5252] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.0/26 host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:48.537981 containerd[2133]: 2026-01-14 00:04:48.497 [INFO][5252] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.0/26 host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:48.537981 containerd[2133]: 2026-01-14 00:04:48.497 [INFO][5252] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.70.0/26 handle="k8s-pod-network.1a3cc7880c6d92e4acf3c63c1cf21e8cd2c461022d7681c0243e0759ef5dc82b" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:48.537981 containerd[2133]: 2026-01-14 00:04:48.499 [INFO][5252] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1a3cc7880c6d92e4acf3c63c1cf21e8cd2c461022d7681c0243e0759ef5dc82b Jan 14 00:04:48.537981 containerd[2133]: 2026-01-14 00:04:48.505 [INFO][5252] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.70.0/26 handle="k8s-pod-network.1a3cc7880c6d92e4acf3c63c1cf21e8cd2c461022d7681c0243e0759ef5dc82b" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:48.537981 containerd[2133]: 2026-01-14 00:04:48.510 [INFO][5252] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.70.4/26] block=192.168.70.0/26 handle="k8s-pod-network.1a3cc7880c6d92e4acf3c63c1cf21e8cd2c461022d7681c0243e0759ef5dc82b" host="ci-4547.0.0-n-d5ef04779b" Jan 14 
00:04:48.537981 containerd[2133]: 2026-01-14 00:04:48.510 [INFO][5252] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.4/26] handle="k8s-pod-network.1a3cc7880c6d92e4acf3c63c1cf21e8cd2c461022d7681c0243e0759ef5dc82b" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:48.537981 containerd[2133]: 2026-01-14 00:04:48.510 [INFO][5252] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 00:04:48.537981 containerd[2133]: 2026-01-14 00:04:48.510 [INFO][5252] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.70.4/26] IPv6=[] ContainerID="1a3cc7880c6d92e4acf3c63c1cf21e8cd2c461022d7681c0243e0759ef5dc82b" HandleID="k8s-pod-network.1a3cc7880c6d92e4acf3c63c1cf21e8cd2c461022d7681c0243e0759ef5dc82b" Workload="ci--4547.0.0--n--d5ef04779b-k8s-csi--node--driver--hgz55-eth0" Jan 14 00:04:48.538642 containerd[2133]: 2026-01-14 00:04:48.512 [INFO][5237] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a3cc7880c6d92e4acf3c63c1cf21e8cd2c461022d7681c0243e0759ef5dc82b" Namespace="calico-system" Pod="csi-node-driver-hgz55" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-csi--node--driver--hgz55-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--d5ef04779b-k8s-csi--node--driver--hgz55-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f0601279-098f-420b-84a8-b4028d2c0ea2", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 4, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-d5ef04779b", ContainerID:"", Pod:"csi-node-driver-hgz55", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.70.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali871f4a95d4a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:04:48.538642 containerd[2133]: 2026-01-14 00:04:48.512 [INFO][5237] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.4/32] ContainerID="1a3cc7880c6d92e4acf3c63c1cf21e8cd2c461022d7681c0243e0759ef5dc82b" Namespace="calico-system" Pod="csi-node-driver-hgz55" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-csi--node--driver--hgz55-eth0" Jan 14 00:04:48.538642 containerd[2133]: 2026-01-14 00:04:48.512 [INFO][5237] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali871f4a95d4a ContainerID="1a3cc7880c6d92e4acf3c63c1cf21e8cd2c461022d7681c0243e0759ef5dc82b" Namespace="calico-system" Pod="csi-node-driver-hgz55" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-csi--node--driver--hgz55-eth0" Jan 14 00:04:48.538642 containerd[2133]: 2026-01-14 00:04:48.517 [INFO][5237] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="1a3cc7880c6d92e4acf3c63c1cf21e8cd2c461022d7681c0243e0759ef5dc82b" Namespace="calico-system" Pod="csi-node-driver-hgz55" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-csi--node--driver--hgz55-eth0" Jan 14 00:04:48.538642 containerd[2133]: 2026-01-14 00:04:48.519 [INFO][5237] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1a3cc7880c6d92e4acf3c63c1cf21e8cd2c461022d7681c0243e0759ef5dc82b" Namespace="calico-system" Pod="csi-node-driver-hgz55" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-csi--node--driver--hgz55-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--d5ef04779b-k8s-csi--node--driver--hgz55-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f0601279-098f-420b-84a8-b4028d2c0ea2", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 4, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-d5ef04779b", ContainerID:"1a3cc7880c6d92e4acf3c63c1cf21e8cd2c461022d7681c0243e0759ef5dc82b", Pod:"csi-node-driver-hgz55", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.70.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali871f4a95d4a", MAC:"72:89:ae:df:a7:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:04:48.538642 containerd[2133]: 2026-01-14 00:04:48.533 [INFO][5237] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a3cc7880c6d92e4acf3c63c1cf21e8cd2c461022d7681c0243e0759ef5dc82b" Namespace="calico-system" Pod="csi-node-driver-hgz55" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-csi--node--driver--hgz55-eth0" Jan 14 00:04:48.548000 audit[5266]: NETFILTER_CFG table=filter:132 family=2 entries=44 op=nft_register_chain pid=5266 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:04:48.548000 audit[5266]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21952 a0=3 a1=fffff01c75b0 a2=0 a3=ffffa91b6fa8 items=0 ppid=4814 pid=5266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.548000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:04:48.581409 kubelet[3683]: I0114 00:04:48.581266 3683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rp5x8" podStartSLOduration=40.581219588 podStartE2EDuration="40.581219588s" podCreationTimestamp="2026-01-14 
00:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:04:48.579731453 +0000 UTC m=+46.285242917" watchObservedRunningTime="2026-01-14 00:04:48.581219588 +0000 UTC m=+46.286731052" Jan 14 00:04:48.587710 containerd[2133]: time="2026-01-14T00:04:48.587390695Z" level=info msg="connecting to shim 1a3cc7880c6d92e4acf3c63c1cf21e8cd2c461022d7681c0243e0759ef5dc82b" address="unix:///run/containerd/s/b0f50428e7ed18f7aa1beae91099a389bfec1582666797f0d291ad21bdc506b8" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:04:48.589348 containerd[2133]: time="2026-01-14T00:04:48.589311480Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:04:48.592824 containerd[2133]: time="2026-01-14T00:04:48.592774929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 00:04:48.592999 containerd[2133]: time="2026-01-14T00:04:48.592844779Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 00:04:48.593630 kubelet[3683]: E0114 00:04:48.593392 3683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 00:04:48.594125 kubelet[3683]: E0114 00:04:48.593961 3683 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 00:04:48.596438 kubelet[3683]: E0114 00:04:48.596244 3683 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ttqb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-79cf4bbcf4-jhdlg_calico-system(52895973-a9d8-41ff-890b-151c819ea908): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 00:04:48.599764 kubelet[3683]: E0114 00:04:48.599613 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79cf4bbcf4-jhdlg" podUID="52895973-a9d8-41ff-890b-151c819ea908" Jan 14 00:04:48.624000 audit[5301]: NETFILTER_CFG table=filter:133 family=2 entries=20 op=nft_register_rule pid=5301 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:48.624000 audit[5301]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcc08f100 a2=0 a3=1 items=0 ppid=3789 pid=5301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.624000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:48.629418 systemd[1]: Started cri-containerd-1a3cc7880c6d92e4acf3c63c1cf21e8cd2c461022d7681c0243e0759ef5dc82b.scope - libcontainer container 1a3cc7880c6d92e4acf3c63c1cf21e8cd2c461022d7681c0243e0759ef5dc82b. Jan 14 00:04:48.632000 audit[5301]: NETFILTER_CFG table=nat:134 family=2 entries=14 op=nft_register_rule pid=5301 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:48.632000 audit[5301]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffcc08f100 a2=0 a3=1 items=0 ppid=3789 pid=5301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.632000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:48.639000 audit: BPF prog-id=250 op=LOAD Jan 14 00:04:48.640000 audit: BPF prog-id=251 op=LOAD Jan 14 00:04:48.640000 audit[5289]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000220180 a2=98 a3=0 items=0 ppid=5276 pid=5289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161336363373838306336643932653461636633633633633163663231 Jan 14 00:04:48.640000 audit: BPF prog-id=251 op=UNLOAD Jan 14 00:04:48.640000 audit[5289]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5276 pid=5289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161336363373838306336643932653461636633633633633163663231 Jan 14 00:04:48.640000 audit: BPF prog-id=252 op=LOAD Jan 14 00:04:48.640000 audit[5289]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40002203e8 a2=98 a3=0 items=0 ppid=5276 pid=5289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161336363373838306336643932653461636633633633633163663231 Jan 14 00:04:48.640000 audit: BPF prog-id=253 op=LOAD Jan 14 00:04:48.640000 audit[5289]: SYSCALL 
arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000220168 a2=98 a3=0 items=0 ppid=5276 pid=5289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161336363373838306336643932653461636633633633633163663231 Jan 14 00:04:48.640000 audit: BPF prog-id=253 op=UNLOAD Jan 14 00:04:48.640000 audit[5289]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5276 pid=5289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161336363373838306336643932653461636633633633633163663231 Jan 14 00:04:48.640000 audit: BPF prog-id=252 op=UNLOAD Jan 14 00:04:48.640000 audit[5289]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5276 pid=5289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161336363373838306336643932653461636633633633633163663231 Jan 14 00:04:48.640000 audit: BPF prog-id=254 op=LOAD Jan 14 00:04:48.640000 audit[5289]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000220648 a2=98 a3=0 items=0 ppid=5276 pid=5289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161336363373838306336643932653461636633633633633163663231 Jan 14 00:04:48.649000 audit[5310]: NETFILTER_CFG table=filter:135 family=2 entries=17 op=nft_register_rule pid=5310 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:48.649000 audit[5310]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff246f4d0 a2=0 a3=1 items=0 ppid=3789 pid=5310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.649000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:48.653000 audit[5310]: NETFILTER_CFG table=nat:136 family=2 entries=35 op=nft_register_chain pid=5310 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:48.653000 audit[5310]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=14196 a0=3 a1=fffff246f4d0 a2=0 a3=1 items=0 ppid=3789 pid=5310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:48.653000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:48.661923 containerd[2133]: time="2026-01-14T00:04:48.661881426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hgz55,Uid:f0601279-098f-420b-84a8-b4028d2c0ea2,Namespace:calico-system,Attempt:0,} returns sandbox id \"1a3cc7880c6d92e4acf3c63c1cf21e8cd2c461022d7681c0243e0759ef5dc82b\"" Jan 14 00:04:48.664378 containerd[2133]: time="2026-01-14T00:04:48.663973510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 00:04:48.920131 containerd[2133]: time="2026-01-14T00:04:48.920041808Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:04:48.924240 containerd[2133]: time="2026-01-14T00:04:48.924118303Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 00:04:48.924240 containerd[2133]: time="2026-01-14T00:04:48.924173856Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 00:04:48.924406 kubelet[3683]: E0114 00:04:48.924359 3683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 00:04:48.924476 kubelet[3683]: E0114 00:04:48.924411 3683 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 00:04:48.924559 kubelet[3683]: E0114 00:04:48.924514 3683 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hhqbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hgz55_calico-system(f0601279-098f-420b-84a8-b4028d2c0ea2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 00:04:48.926997 containerd[2133]: time="2026-01-14T00:04:48.926946171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 00:04:49.200958 containerd[2133]: time="2026-01-14T00:04:49.200821607Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:04:49.204382 containerd[2133]: time="2026-01-14T00:04:49.204271696Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 00:04:49.204382 containerd[2133]: time="2026-01-14T00:04:49.204323905Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 00:04:49.204983 kubelet[3683]: E0114 00:04:49.204614 3683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 00:04:49.204983 kubelet[3683]: E0114 00:04:49.204678 3683 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 00:04:49.204983 kubelet[3683]: E0114 00:04:49.204782 3683 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hhqbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hgz55_calico-system(f0601279-098f-420b-84a8-b4028d2c0ea2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 00:04:49.206234 kubelet[3683]: E0114 00:04:49.206170 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hgz55" podUID="f0601279-098f-420b-84a8-b4028d2c0ea2" Jan 14 00:04:49.351382 systemd-networkd[1723]: calie88ef3d6a1a: Gained IPv6LL Jan 14 00:04:49.381608 containerd[2133]: time="2026-01-14T00:04:49.381568702Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-766dfc88bb-vxthm,Uid:23297b6d-ba28-4d3e-a11a-c82aeab97bbe,Namespace:calico-apiserver,Attempt:0,}" Jan 14 00:04:49.497519 systemd-networkd[1723]: cali460d285620e: Link UP Jan 14 00:04:49.500811 systemd-networkd[1723]: cali460d285620e: Gained carrier Jan 14 00:04:49.518445 containerd[2133]: 2026-01-14 00:04:49.439 [INFO][5317] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--n--d5ef04779b-k8s-calico--apiserver--766dfc88bb--vxthm-eth0 calico-apiserver-766dfc88bb- calico-apiserver 23297b6d-ba28-4d3e-a11a-c82aeab97bbe 797 0 2026-01-14 00:04:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:766dfc88bb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.0.0-n-d5ef04779b calico-apiserver-766dfc88bb-vxthm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali460d285620e [] [] }} ContainerID="04d9e08f3d4a553f3c10eed27e151bdad75ef0ea8a335e2cd0a1b630925c33e5" Namespace="calico-apiserver" Pod="calico-apiserver-766dfc88bb-vxthm" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-calico--apiserver--766dfc88bb--vxthm-" Jan 14 00:04:49.518445 containerd[2133]: 2026-01-14 00:04:49.439 [INFO][5317] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="04d9e08f3d4a553f3c10eed27e151bdad75ef0ea8a335e2cd0a1b630925c33e5" Namespace="calico-apiserver" Pod="calico-apiserver-766dfc88bb-vxthm" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-calico--apiserver--766dfc88bb--vxthm-eth0" Jan 14 00:04:49.518445 containerd[2133]: 2026-01-14 00:04:49.458 [INFO][5328] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04d9e08f3d4a553f3c10eed27e151bdad75ef0ea8a335e2cd0a1b630925c33e5" HandleID="k8s-pod-network.04d9e08f3d4a553f3c10eed27e151bdad75ef0ea8a335e2cd0a1b630925c33e5" Workload="ci--4547.0.0--n--d5ef04779b-k8s-calico--apiserver--766dfc88bb--vxthm-eth0" Jan 14 00:04:49.518445 containerd[2133]: 2026-01-14 00:04:49.458 [INFO][5328] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="04d9e08f3d4a553f3c10eed27e151bdad75ef0ea8a335e2cd0a1b630925c33e5" HandleID="k8s-pod-network.04d9e08f3d4a553f3c10eed27e151bdad75ef0ea8a335e2cd0a1b630925c33e5" Workload="ci--4547.0.0--n--d5ef04779b-k8s-calico--apiserver--766dfc88bb--vxthm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b5f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.0.0-n-d5ef04779b", "pod":"calico-apiserver-766dfc88bb-vxthm", "timestamp":"2026-01-14 00:04:49.458090827 +0000 UTC"}, Hostname:"ci-4547.0.0-n-d5ef04779b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:04:49.518445 containerd[2133]: 2026-01-14 00:04:49.458 [INFO][5328] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 00:04:49.518445 containerd[2133]: 2026-01-14 00:04:49.458 [INFO][5328] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 00:04:49.518445 containerd[2133]: 2026-01-14 00:04:49.458 [INFO][5328] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-n-d5ef04779b' Jan 14 00:04:49.518445 containerd[2133]: 2026-01-14 00:04:49.467 [INFO][5328] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.04d9e08f3d4a553f3c10eed27e151bdad75ef0ea8a335e2cd0a1b630925c33e5" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:49.518445 containerd[2133]: 2026-01-14 00:04:49.471 [INFO][5328] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:49.518445 containerd[2133]: 2026-01-14 00:04:49.474 [INFO][5328] ipam/ipam.go 511: Trying affinity for 192.168.70.0/26 host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:49.518445 containerd[2133]: 2026-01-14 00:04:49.476 [INFO][5328] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.0/26 host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:49.518445 containerd[2133]: 2026-01-14 00:04:49.478 [INFO][5328] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.0/26 host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:49.518445 containerd[2133]: 2026-01-14 00:04:49.478 [INFO][5328] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.70.0/26 handle="k8s-pod-network.04d9e08f3d4a553f3c10eed27e151bdad75ef0ea8a335e2cd0a1b630925c33e5" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:49.518445 containerd[2133]: 2026-01-14 00:04:49.479 [INFO][5328] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.04d9e08f3d4a553f3c10eed27e151bdad75ef0ea8a335e2cd0a1b630925c33e5 Jan 14 00:04:49.518445 containerd[2133]: 2026-01-14 00:04:49.484 [INFO][5328] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.70.0/26 handle="k8s-pod-network.04d9e08f3d4a553f3c10eed27e151bdad75ef0ea8a335e2cd0a1b630925c33e5" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:49.518445 containerd[2133]: 2026-01-14 00:04:49.492 [INFO][5328] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.70.5/26] block=192.168.70.0/26 handle="k8s-pod-network.04d9e08f3d4a553f3c10eed27e151bdad75ef0ea8a335e2cd0a1b630925c33e5" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:49.518445 containerd[2133]: 2026-01-14 00:04:49.492 [INFO][5328] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.5/26] handle="k8s-pod-network.04d9e08f3d4a553f3c10eed27e151bdad75ef0ea8a335e2cd0a1b630925c33e5" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:49.518445 containerd[2133]: 2026-01-14 00:04:49.492 [INFO][5328] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 00:04:49.518445 containerd[2133]: 2026-01-14 00:04:49.492 [INFO][5328] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.70.5/26] IPv6=[] ContainerID="04d9e08f3d4a553f3c10eed27e151bdad75ef0ea8a335e2cd0a1b630925c33e5" HandleID="k8s-pod-network.04d9e08f3d4a553f3c10eed27e151bdad75ef0ea8a335e2cd0a1b630925c33e5" Workload="ci--4547.0.0--n--d5ef04779b-k8s-calico--apiserver--766dfc88bb--vxthm-eth0" Jan 14 00:04:49.519961 containerd[2133]: 2026-01-14 00:04:49.495 [INFO][5317] cni-plugin/k8s.go 418: Populated endpoint ContainerID="04d9e08f3d4a553f3c10eed27e151bdad75ef0ea8a335e2cd0a1b630925c33e5" Namespace="calico-apiserver" Pod="calico-apiserver-766dfc88bb-vxthm" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-calico--apiserver--766dfc88bb--vxthm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--d5ef04779b-k8s-calico--apiserver--766dfc88bb--vxthm-eth0", GenerateName:"calico-apiserver-766dfc88bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"23297b6d-ba28-4d3e-a11a-c82aeab97bbe", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 4, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"766dfc88bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-d5ef04779b", ContainerID:"", Pod:"calico-apiserver-766dfc88bb-vxthm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali460d285620e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:04:49.519961 containerd[2133]: 2026-01-14 00:04:49.495 [INFO][5317] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.5/32] ContainerID="04d9e08f3d4a553f3c10eed27e151bdad75ef0ea8a335e2cd0a1b630925c33e5" Namespace="calico-apiserver" Pod="calico-apiserver-766dfc88bb-vxthm" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-calico--apiserver--766dfc88bb--vxthm-eth0" Jan 14 00:04:49.519961 containerd[2133]: 2026-01-14 00:04:49.495 [INFO][5317] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali460d285620e ContainerID="04d9e08f3d4a553f3c10eed27e151bdad75ef0ea8a335e2cd0a1b630925c33e5" Namespace="calico-apiserver" Pod="calico-apiserver-766dfc88bb-vxthm" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-calico--apiserver--766dfc88bb--vxthm-eth0" Jan 14 00:04:49.519961 containerd[2133]: 2026-01-14 00:04:49.501 [INFO][5317] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="04d9e08f3d4a553f3c10eed27e151bdad75ef0ea8a335e2cd0a1b630925c33e5" Namespace="calico-apiserver" Pod="calico-apiserver-766dfc88bb-vxthm" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-calico--apiserver--766dfc88bb--vxthm-eth0" Jan 14 00:04:49.519961 containerd[2133]: 2026-01-14 00:04:49.502 [INFO][5317] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="04d9e08f3d4a553f3c10eed27e151bdad75ef0ea8a335e2cd0a1b630925c33e5" Namespace="calico-apiserver" Pod="calico-apiserver-766dfc88bb-vxthm" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-calico--apiserver--766dfc88bb--vxthm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--d5ef04779b-k8s-calico--apiserver--766dfc88bb--vxthm-eth0", GenerateName:"calico-apiserver-766dfc88bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"23297b6d-ba28-4d3e-a11a-c82aeab97bbe", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 4, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"766dfc88bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-d5ef04779b", ContainerID:"04d9e08f3d4a553f3c10eed27e151bdad75ef0ea8a335e2cd0a1b630925c33e5", Pod:"calico-apiserver-766dfc88bb-vxthm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali460d285620e", MAC:"66:41:ef:7e:82:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:04:49.519961 containerd[2133]: 2026-01-14 00:04:49.514 [INFO][5317] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="04d9e08f3d4a553f3c10eed27e151bdad75ef0ea8a335e2cd0a1b630925c33e5" Namespace="calico-apiserver" Pod="calico-apiserver-766dfc88bb-vxthm" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-calico--apiserver--766dfc88bb--vxthm-eth0" Jan 14 00:04:49.528000 audit[5343]: NETFILTER_CFG table=filter:137 family=2 entries=62 op=nft_register_chain pid=5343 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:04:49.528000 audit[5343]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=31772 a0=3 a1=ffffc3b29260 a2=0 a3=ffffa7f84fa8 items=0 ppid=4814 pid=5343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:49.528000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:04:49.543598 systemd-networkd[1723]: calia3258b42d1d: Gained IPv6LL Jan 14 00:04:49.565596 containerd[2133]: time="2026-01-14T00:04:49.565303387Z" level=info msg="connecting to shim 04d9e08f3d4a553f3c10eed27e151bdad75ef0ea8a335e2cd0a1b630925c33e5" address="unix:///run/containerd/s/38724ea0e4a3dab2eee329c7388865f0bcb9e35e58dc030966bdd0054c63a58c" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:04:49.569538 kubelet[3683]: E0114 00:04:49.569485 3683 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79cf4bbcf4-jhdlg" podUID="52895973-a9d8-41ff-890b-151c819ea908" Jan 14 00:04:49.569894 kubelet[3683]: E0114 00:04:49.569861 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hgz55" podUID="f0601279-098f-420b-84a8-b4028d2c0ea2" Jan 14 00:04:49.603621 systemd[1]: Started cri-containerd-04d9e08f3d4a553f3c10eed27e151bdad75ef0ea8a335e2cd0a1b630925c33e5.scope - libcontainer container 04d9e08f3d4a553f3c10eed27e151bdad75ef0ea8a335e2cd0a1b630925c33e5. Jan 14 00:04:49.620000 audit: BPF prog-id=255 op=LOAD Jan 14 00:04:49.621000 audit: BPF prog-id=256 op=LOAD Jan 14 00:04:49.621000 audit[5364]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5352 pid=5364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:49.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034643965303866336434613535336633633130656564323765313531 Jan 14 00:04:49.621000 audit: BPF prog-id=256 op=UNLOAD Jan 14 00:04:49.621000 audit[5364]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5352 pid=5364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:49.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034643965303866336434613535336633633130656564323765313531 Jan 14 00:04:49.621000 audit: BPF prog-id=257 op=LOAD Jan 14 00:04:49.621000 audit[5364]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5352 pid=5364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:49.621000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034643965303866336434613535336633633130656564323765313531 Jan 14 00:04:49.621000 audit: BPF prog-id=258 op=LOAD Jan 14 00:04:49.621000 audit[5364]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5352 pid=5364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:49.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034643965303866336434613535336633633130656564323765313531 Jan 14 00:04:49.622000 audit: BPF prog-id=258 op=UNLOAD Jan 14 00:04:49.622000 audit[5364]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5352 pid=5364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:49.622000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034643965303866336434613535336633633130656564323765313531 Jan 14 00:04:49.622000 audit: BPF prog-id=257 op=UNLOAD Jan 14 00:04:49.622000 audit[5364]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5352 pid=5364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:49.622000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034643965303866336434613535336633633130656564323765313531 Jan 14 00:04:49.622000 audit: BPF prog-id=259 op=LOAD Jan 14 00:04:49.622000 audit[5364]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5352 pid=5364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:49.622000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034643965303866336434613535336633633130656564323765313531 Jan 14 00:04:49.646330 containerd[2133]: time="2026-01-14T00:04:49.646288351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-766dfc88bb-vxthm,Uid:23297b6d-ba28-4d3e-a11a-c82aeab97bbe,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"04d9e08f3d4a553f3c10eed27e151bdad75ef0ea8a335e2cd0a1b630925c33e5\"" Jan 14 00:04:49.648129 containerd[2133]: time="2026-01-14T00:04:49.648082333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 00:04:49.907401 containerd[2133]: 
time="2026-01-14T00:04:49.907213129Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:04:49.913265 containerd[2133]: time="2026-01-14T00:04:49.913224968Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 00:04:49.913503 containerd[2133]: time="2026-01-14T00:04:49.913278642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 00:04:49.913675 kubelet[3683]: E0114 00:04:49.913631 3683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:04:49.914083 kubelet[3683]: E0114 00:04:49.913680 3683 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:04:49.914083 kubelet[3683]: E0114 00:04:49.913820 3683 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxtgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-766dfc88bb-vxthm_calico-apiserver(23297b6d-ba28-4d3e-a11a-c82aeab97bbe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 00:04:49.915079 kubelet[3683]: E0114 00:04:49.915050 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766dfc88bb-vxthm" podUID="23297b6d-ba28-4d3e-a11a-c82aeab97bbe" Jan 14 00:04:50.383265 containerd[2133]: time="2026-01-14T00:04:50.382960627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6zdb5,Uid:804e2c94-fdab-4f15-b317-31c3221bea29,Namespace:kube-system,Attempt:0,}" Jan 14 00:04:50.383620 containerd[2133]: time="2026-01-14T00:04:50.383530343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-766dfc88bb-t7bd6,Uid:52b59e1e-92be-4298-96f1-1e43387d21fa,Namespace:calico-apiserver,Attempt:0,}" Jan 14 00:04:50.383620 containerd[2133]: time="2026-01-14T00:04:50.383591313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pdm6c,Uid:dd23407e-e7fa-43bd-b827-67d8fab88d3b,Namespace:calico-system,Attempt:0,}" Jan 14 00:04:50.503574 systemd-networkd[1723]: cali871f4a95d4a: Gained IPv6LL Jan 14 00:04:50.546105 systemd-networkd[1723]: cali651251c146d: Link UP Jan 14 00:04:50.547945 systemd-networkd[1723]: cali651251c146d: Gained carrier Jan 14 00:04:50.568375 containerd[2133]: 2026-01-14 00:04:50.463 [INFO][5390] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--n--d5ef04779b-k8s-coredns--668d6bf9bc--6zdb5-eth0 coredns-668d6bf9bc- kube-system 804e2c94-fdab-4f15-b317-31c3221bea29 788 0 2026-01-14 00:04:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547.0.0-n-d5ef04779b coredns-668d6bf9bc-6zdb5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali651251c146d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a66adb236fa1b0e8dff1a7cf5589a7da3360379d5f41a1a203175ef87c879ac8" Namespace="kube-system" Pod="coredns-668d6bf9bc-6zdb5" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-coredns--668d6bf9bc--6zdb5-" Jan 14 00:04:50.568375 containerd[2133]: 2026-01-14 00:04:50.463 [INFO][5390] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a66adb236fa1b0e8dff1a7cf5589a7da3360379d5f41a1a203175ef87c879ac8" Namespace="kube-system" Pod="coredns-668d6bf9bc-6zdb5" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-coredns--668d6bf9bc--6zdb5-eth0" Jan 14 00:04:50.568375 containerd[2133]: 2026-01-14 00:04:50.492 [INFO][5430] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a66adb236fa1b0e8dff1a7cf5589a7da3360379d5f41a1a203175ef87c879ac8" HandleID="k8s-pod-network.a66adb236fa1b0e8dff1a7cf5589a7da3360379d5f41a1a203175ef87c879ac8" Workload="ci--4547.0.0--n--d5ef04779b-k8s-coredns--668d6bf9bc--6zdb5-eth0" Jan 14 00:04:50.568375 containerd[2133]: 2026-01-14 00:04:50.492 [INFO][5430] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a66adb236fa1b0e8dff1a7cf5589a7da3360379d5f41a1a203175ef87c879ac8" HandleID="k8s-pod-network.a66adb236fa1b0e8dff1a7cf5589a7da3360379d5f41a1a203175ef87c879ac8" Workload="ci--4547.0.0--n--d5ef04779b-k8s-coredns--668d6bf9bc--6zdb5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3110), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547.0.0-n-d5ef04779b", "pod":"coredns-668d6bf9bc-6zdb5", "timestamp":"2026-01-14 00:04:50.492244343 +0000 UTC"}, Hostname:"ci-4547.0.0-n-d5ef04779b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:04:50.568375 containerd[2133]: 2026-01-14 00:04:50.492 [INFO][5430] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 00:04:50.568375 containerd[2133]: 2026-01-14 00:04:50.492 [INFO][5430] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 00:04:50.568375 containerd[2133]: 2026-01-14 00:04:50.492 [INFO][5430] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-n-d5ef04779b' Jan 14 00:04:50.568375 containerd[2133]: 2026-01-14 00:04:50.499 [INFO][5430] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a66adb236fa1b0e8dff1a7cf5589a7da3360379d5f41a1a203175ef87c879ac8" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:50.568375 containerd[2133]: 2026-01-14 00:04:50.505 [INFO][5430] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:50.568375 containerd[2133]: 2026-01-14 00:04:50.510 [INFO][5430] ipam/ipam.go 511: Trying affinity for 192.168.70.0/26 host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:50.568375 containerd[2133]: 2026-01-14 00:04:50.512 [INFO][5430] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.0/26 host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:50.568375 containerd[2133]: 2026-01-14 00:04:50.514 [INFO][5430] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.0/26 host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:50.568375 containerd[2133]: 2026-01-14 00:04:50.514 [INFO][5430] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.70.0/26 handle="k8s-pod-network.a66adb236fa1b0e8dff1a7cf5589a7da3360379d5f41a1a203175ef87c879ac8" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:50.568375 containerd[2133]: 2026-01-14 00:04:50.517 [INFO][5430] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a66adb236fa1b0e8dff1a7cf5589a7da3360379d5f41a1a203175ef87c879ac8 Jan 14 00:04:50.568375 containerd[2133]: 2026-01-14 00:04:50.529 [INFO][5430] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.70.0/26 handle="k8s-pod-network.a66adb236fa1b0e8dff1a7cf5589a7da3360379d5f41a1a203175ef87c879ac8" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:50.568375 containerd[2133]: 2026-01-14 00:04:50.535 [INFO][5430] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.70.6/26] block=192.168.70.0/26 handle="k8s-pod-network.a66adb236fa1b0e8dff1a7cf5589a7da3360379d5f41a1a203175ef87c879ac8" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:50.568375 containerd[2133]: 2026-01-14 00:04:50.535 [INFO][5430] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.6/26] handle="k8s-pod-network.a66adb236fa1b0e8dff1a7cf5589a7da3360379d5f41a1a203175ef87c879ac8" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:50.568375 containerd[2133]: 2026-01-14 
00:04:50.535 [INFO][5430] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 00:04:50.568375 containerd[2133]: 2026-01-14 00:04:50.535 [INFO][5430] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.70.6/26] IPv6=[] ContainerID="a66adb236fa1b0e8dff1a7cf5589a7da3360379d5f41a1a203175ef87c879ac8" HandleID="k8s-pod-network.a66adb236fa1b0e8dff1a7cf5589a7da3360379d5f41a1a203175ef87c879ac8" Workload="ci--4547.0.0--n--d5ef04779b-k8s-coredns--668d6bf9bc--6zdb5-eth0" Jan 14 00:04:50.570683 containerd[2133]: 2026-01-14 00:04:50.537 [INFO][5390] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a66adb236fa1b0e8dff1a7cf5589a7da3360379d5f41a1a203175ef87c879ac8" Namespace="kube-system" Pod="coredns-668d6bf9bc-6zdb5" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-coredns--668d6bf9bc--6zdb5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--d5ef04779b-k8s-coredns--668d6bf9bc--6zdb5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"804e2c94-fdab-4f15-b317-31c3221bea29", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 4, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-d5ef04779b", ContainerID:"", Pod:"coredns-668d6bf9bc-6zdb5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali651251c146d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:04:50.570683 containerd[2133]: 2026-01-14 00:04:50.538 [INFO][5390] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.6/32] ContainerID="a66adb236fa1b0e8dff1a7cf5589a7da3360379d5f41a1a203175ef87c879ac8" Namespace="kube-system" Pod="coredns-668d6bf9bc-6zdb5" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-coredns--668d6bf9bc--6zdb5-eth0" Jan 14 00:04:50.570683 containerd[2133]: 2026-01-14 00:04:50.538 [INFO][5390] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali651251c146d ContainerID="a66adb236fa1b0e8dff1a7cf5589a7da3360379d5f41a1a203175ef87c879ac8" Namespace="kube-system" Pod="coredns-668d6bf9bc-6zdb5" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-coredns--668d6bf9bc--6zdb5-eth0" Jan 14 00:04:50.570683 containerd[2133]: 2026-01-14 00:04:50.549 [INFO][5390] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="a66adb236fa1b0e8dff1a7cf5589a7da3360379d5f41a1a203175ef87c879ac8" Namespace="kube-system" Pod="coredns-668d6bf9bc-6zdb5" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-coredns--668d6bf9bc--6zdb5-eth0" Jan 14 00:04:50.570683 containerd[2133]: 2026-01-14 00:04:50.553 [INFO][5390] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a66adb236fa1b0e8dff1a7cf5589a7da3360379d5f41a1a203175ef87c879ac8" Namespace="kube-system" Pod="coredns-668d6bf9bc-6zdb5" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-coredns--668d6bf9bc--6zdb5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--d5ef04779b-k8s-coredns--668d6bf9bc--6zdb5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"804e2c94-fdab-4f15-b317-31c3221bea29", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 4, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-d5ef04779b", ContainerID:"a66adb236fa1b0e8dff1a7cf5589a7da3360379d5f41a1a203175ef87c879ac8", Pod:"coredns-668d6bf9bc-6zdb5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali651251c146d", MAC:"fe:26:91:ee:4d:37", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:04:50.570683 containerd[2133]: 2026-01-14 00:04:50.565 [INFO][5390] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a66adb236fa1b0e8dff1a7cf5589a7da3360379d5f41a1a203175ef87c879ac8" Namespace="kube-system" Pod="coredns-668d6bf9bc-6zdb5" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-coredns--668d6bf9bc--6zdb5-eth0" Jan 14 00:04:50.577279 kubelet[3683]: E0114 00:04:50.577228 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766dfc88bb-vxthm" podUID="23297b6d-ba28-4d3e-a11a-c82aeab97bbe" Jan 14 00:04:50.579444 kubelet[3683]: E0114 00:04:50.578739 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hgz55" podUID="f0601279-098f-420b-84a8-b4028d2c0ea2" Jan 14 00:04:50.588000 audit[5460]: NETFILTER_CFG table=filter:138 family=2 entries=48 op=nft_register_chain pid=5460 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:04:50.595180 kernel: kauditd_printk_skb: 350 callbacks suppressed Jan 14 00:04:50.595295 kernel: audit: type=1325 audit(1768349090.588:721): table=filter:138 family=2 entries=48 op=nft_register_chain pid=5460 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:04:50.588000 audit[5460]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22720 a0=3 a1=ffffd6a7f340 a2=0 a3=ffffa2301fa8 items=0 ppid=4814 pid=5460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:50.624466 kernel: audit: type=1300 audit(1768349090.588:721): arch=c00000b7 syscall=211 success=yes exit=22720 a0=3 a1=ffffd6a7f340 a2=0 a3=ffffa2301fa8 items=0 ppid=4814 pid=5460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:50.588000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:04:50.642320 kernel: audit: type=1327 audit(1768349090.588:721): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:04:50.647750 containerd[2133]: time="2026-01-14T00:04:50.646537197Z" level=info msg="connecting to shim a66adb236fa1b0e8dff1a7cf5589a7da3360379d5f41a1a203175ef87c879ac8" address="unix:///run/containerd/s/1e08e3901b9bd6cdc77bc88206b16d8fde48b8d1db96c32c61bc43c6068fdc51" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:04:50.672000 audit[5487]: NETFILTER_CFG table=filter:139 family=2 entries=14 op=nft_register_rule pid=5487 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:50.685340 kernel: audit: type=1325 audit(1768349090.672:722): table=filter:139 family=2 entries=14 op=nft_register_rule pid=5487 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:50.672000 audit[5487]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdeeaaee0 a2=0 a3=1 items=0 ppid=3789 pid=5487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:50.672000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:50.715492 kernel: audit: type=1300 audit(1768349090.672:722): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdeeaaee0 a2=0 a3=1 items=0 ppid=3789 pid=5487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:50.715609 kernel: audit: type=1327 audit(1768349090.672:722): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:50.685000 audit[5487]: NETFILTER_CFG table=nat:140 family=2 entries=20 op=nft_register_rule pid=5487 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:50.726549 kernel: audit: type=1325 audit(1768349090.685:723): table=nat:140 family=2 entries=20 op=nft_register_rule pid=5487 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:50.685000 audit[5487]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffdeeaaee0 a2=0 a3=1 items=0 ppid=3789 pid=5487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:50.746076 kernel: audit: type=1300 audit(1768349090.685:723): arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffdeeaaee0 a2=0 a3=1 items=0 ppid=3789 pid=5487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:50.685000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:50.757199 kernel: audit: type=1327 audit(1768349090.685:723): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:50.759539 systemd[1]: Started cri-containerd-a66adb236fa1b0e8dff1a7cf5589a7da3360379d5f41a1a203175ef87c879ac8.scope - libcontainer container a66adb236fa1b0e8dff1a7cf5589a7da3360379d5f41a1a203175ef87c879ac8. 
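The PROCTITLE values in the audit records above are hex-encoded command lines (argv joined by NUL bytes), and the accompanying SYSCALL records show arch=c00000b7, the AArch64 audit ABI, with syscall 211 corresponding to sendmsg in the arm64 generic syscall table, consistent with nftables netlink traffic. As a minimal illustration (not part of the captured journal), a sketch that decodes such a value; the sample is copied from the 00:04:50.588 record:

```python
# Decode a Linux audit PROCTITLE value: hex-encoded argv joined by NUL bytes.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return " ".join(
        arg.decode("utf-8", errors="replace")
        for arg in raw.split(b"\x00")
        if arg
    )

# Sample copied from the audit record at 00:04:50.588 above.
sample = (
    "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
    "002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030"
)
print(decode_proctitle(sample))
# -> iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
```

Applied to the 00:04:50.672 record, the same decoder yields iptables-restore -w 5 -W 100000 --noflush --counters.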
Jan 14 00:04:50.769754 systemd-networkd[1723]: cali924d356d23d: Link UP Jan 14 00:04:50.771695 systemd-networkd[1723]: cali924d356d23d: Gained carrier Jan 14 00:04:50.787000 audit: BPF prog-id=260 op=LOAD Jan 14 00:04:50.793607 kernel: audit: type=1334 audit(1768349090.787:724): prog-id=260 op=LOAD Jan 14 00:04:50.792000 audit: BPF prog-id=261 op=LOAD Jan 14 00:04:50.792000 audit[5482]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=5470 pid=5482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:50.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136366164623233366661316230653864666631613763663535383961 Jan 14 00:04:50.792000 audit: BPF prog-id=261 op=UNLOAD Jan 14 00:04:50.792000 audit[5482]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5470 pid=5482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:50.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136366164623233366661316230653864666631613763663535383961 Jan 14 00:04:50.793000 audit: BPF prog-id=262 op=LOAD Jan 14 00:04:50.793000 audit[5482]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=5470 pid=5482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:50.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136366164623233366661316230653864666631613763663535383961 Jan 14 00:04:50.793000 audit: BPF prog-id=263 op=LOAD Jan 14 00:04:50.793000 audit[5482]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=5470 pid=5482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:50.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136366164623233366661316230653864666631613763663535383961 Jan 14 00:04:50.794000 audit: BPF prog-id=263 op=UNLOAD Jan 14 00:04:50.794000 audit[5482]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5470 pid=5482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:50.794000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136366164623233366661316230653864666631613763663535383961 Jan 14 00:04:50.794000 audit: BPF prog-id=262 op=UNLOAD Jan 14 00:04:50.794000 audit[5482]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5470 pid=5482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:50.794000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136366164623233366661316230653864666631613763663535383961 Jan 14 00:04:50.795000 audit: BPF prog-id=264 op=LOAD Jan 14 00:04:50.795000 audit[5482]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=5470 pid=5482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:50.795000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136366164623233366661316230653864666631613763663535383961 Jan 14 00:04:50.801990 containerd[2133]: 2026-01-14 00:04:50.475 [INFO][5394] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--n--d5ef04779b-k8s-calico--apiserver--766dfc88bb--t7bd6-eth0 calico-apiserver-766dfc88bb- calico-apiserver 52b59e1e-92be-4298-96f1-1e43387d21fa 800 0 2026-01-14 00:04:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:766dfc88bb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.0.0-n-d5ef04779b calico-apiserver-766dfc88bb-t7bd6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali924d356d23d [] [] }} ContainerID="0ccb09bc4ab153201f935f58ab9d007bee2efb93db36cf577a0d1e6c021bb68d" Namespace="calico-apiserver" Pod="calico-apiserver-766dfc88bb-t7bd6" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-calico--apiserver--766dfc88bb--t7bd6-" Jan 14 00:04:50.801990 containerd[2133]: 2026-01-14 00:04:50.477 [INFO][5394] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0ccb09bc4ab153201f935f58ab9d007bee2efb93db36cf577a0d1e6c021bb68d" Namespace="calico-apiserver" Pod="calico-apiserver-766dfc88bb-t7bd6" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-calico--apiserver--766dfc88bb--t7bd6-eth0" Jan 14 00:04:50.801990 containerd[2133]: 2026-01-14 00:04:50.509 [INFO][5436] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0ccb09bc4ab153201f935f58ab9d007bee2efb93db36cf577a0d1e6c021bb68d" HandleID="k8s-pod-network.0ccb09bc4ab153201f935f58ab9d007bee2efb93db36cf577a0d1e6c021bb68d" Workload="ci--4547.0.0--n--d5ef04779b-k8s-calico--apiserver--766dfc88bb--t7bd6-eth0" Jan 14 00:04:50.801990 containerd[2133]: 2026-01-14 00:04:50.509 [INFO][5436] ipam/ipam_plugin.go 
275: Auto assigning IP ContainerID="0ccb09bc4ab153201f935f58ab9d007bee2efb93db36cf577a0d1e6c021bb68d" HandleID="k8s-pod-network.0ccb09bc4ab153201f935f58ab9d007bee2efb93db36cf577a0d1e6c021bb68d" Workload="ci--4547.0.0--n--d5ef04779b-k8s-calico--apiserver--766dfc88bb--t7bd6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.0.0-n-d5ef04779b", "pod":"calico-apiserver-766dfc88bb-t7bd6", "timestamp":"2026-01-14 00:04:50.509006986 +0000 UTC"}, Hostname:"ci-4547.0.0-n-d5ef04779b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:04:50.801990 containerd[2133]: 2026-01-14 00:04:50.509 [INFO][5436] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 00:04:50.801990 containerd[2133]: 2026-01-14 00:04:50.535 [INFO][5436] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 00:04:50.801990 containerd[2133]: 2026-01-14 00:04:50.535 [INFO][5436] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-n-d5ef04779b' Jan 14 00:04:50.801990 containerd[2133]: 2026-01-14 00:04:50.607 [INFO][5436] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0ccb09bc4ab153201f935f58ab9d007bee2efb93db36cf577a0d1e6c021bb68d" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:50.801990 containerd[2133]: 2026-01-14 00:04:50.657 [INFO][5436] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:50.801990 containerd[2133]: 2026-01-14 00:04:50.669 [INFO][5436] ipam/ipam.go 511: Trying affinity for 192.168.70.0/26 host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:50.801990 containerd[2133]: 2026-01-14 00:04:50.671 [INFO][5436] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.0/26 host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:50.801990 containerd[2133]: 2026-01-14 00:04:50.685 [INFO][5436] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.0/26 host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:50.801990 containerd[2133]: 2026-01-14 00:04:50.685 [INFO][5436] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.70.0/26 handle="k8s-pod-network.0ccb09bc4ab153201f935f58ab9d007bee2efb93db36cf577a0d1e6c021bb68d" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:50.801990 containerd[2133]: 2026-01-14 00:04:50.716 [INFO][5436] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0ccb09bc4ab153201f935f58ab9d007bee2efb93db36cf577a0d1e6c021bb68d Jan 14 00:04:50.801990 containerd[2133]: 2026-01-14 00:04:50.730 [INFO][5436] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.70.0/26 handle="k8s-pod-network.0ccb09bc4ab153201f935f58ab9d007bee2efb93db36cf577a0d1e6c021bb68d" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:50.801990 containerd[2133]: 2026-01-14 00:04:50.753 [INFO][5436] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.70.7/26] block=192.168.70.0/26 handle="k8s-pod-network.0ccb09bc4ab153201f935f58ab9d007bee2efb93db36cf577a0d1e6c021bb68d" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:50.801990 containerd[2133]: 2026-01-14 00:04:50.753 [INFO][5436] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.7/26] handle="k8s-pod-network.0ccb09bc4ab153201f935f58ab9d007bee2efb93db36cf577a0d1e6c021bb68d" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:50.801990 containerd[2133]: 2026-01-14 
00:04:50.753 [INFO][5436] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 00:04:50.801990 containerd[2133]: 2026-01-14 00:04:50.753 [INFO][5436] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.70.7/26] IPv6=[] ContainerID="0ccb09bc4ab153201f935f58ab9d007bee2efb93db36cf577a0d1e6c021bb68d" HandleID="k8s-pod-network.0ccb09bc4ab153201f935f58ab9d007bee2efb93db36cf577a0d1e6c021bb68d" Workload="ci--4547.0.0--n--d5ef04779b-k8s-calico--apiserver--766dfc88bb--t7bd6-eth0" Jan 14 00:04:50.803541 containerd[2133]: 2026-01-14 00:04:50.761 [INFO][5394] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0ccb09bc4ab153201f935f58ab9d007bee2efb93db36cf577a0d1e6c021bb68d" Namespace="calico-apiserver" Pod="calico-apiserver-766dfc88bb-t7bd6" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-calico--apiserver--766dfc88bb--t7bd6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--d5ef04779b-k8s-calico--apiserver--766dfc88bb--t7bd6-eth0", GenerateName:"calico-apiserver-766dfc88bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"52b59e1e-92be-4298-96f1-1e43387d21fa", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 4, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"766dfc88bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-d5ef04779b", ContainerID:"", Pod:"calico-apiserver-766dfc88bb-t7bd6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali924d356d23d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:04:50.803541 containerd[2133]: 2026-01-14 00:04:50.762 [INFO][5394] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.7/32] ContainerID="0ccb09bc4ab153201f935f58ab9d007bee2efb93db36cf577a0d1e6c021bb68d" Namespace="calico-apiserver" Pod="calico-apiserver-766dfc88bb-t7bd6" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-calico--apiserver--766dfc88bb--t7bd6-eth0" Jan 14 00:04:50.803541 containerd[2133]: 2026-01-14 00:04:50.762 [INFO][5394] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali924d356d23d ContainerID="0ccb09bc4ab153201f935f58ab9d007bee2efb93db36cf577a0d1e6c021bb68d" Namespace="calico-apiserver" Pod="calico-apiserver-766dfc88bb-t7bd6" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-calico--apiserver--766dfc88bb--t7bd6-eth0" Jan 14 00:04:50.803541 containerd[2133]: 2026-01-14 00:04:50.779 [INFO][5394] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ccb09bc4ab153201f935f58ab9d007bee2efb93db36cf577a0d1e6c021bb68d" Namespace="calico-apiserver" Pod="calico-apiserver-766dfc88bb-t7bd6" 
WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-calico--apiserver--766dfc88bb--t7bd6-eth0" Jan 14 00:04:50.803541 containerd[2133]: 2026-01-14 00:04:50.780 [INFO][5394] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0ccb09bc4ab153201f935f58ab9d007bee2efb93db36cf577a0d1e6c021bb68d" Namespace="calico-apiserver" Pod="calico-apiserver-766dfc88bb-t7bd6" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-calico--apiserver--766dfc88bb--t7bd6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--d5ef04779b-k8s-calico--apiserver--766dfc88bb--t7bd6-eth0", GenerateName:"calico-apiserver-766dfc88bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"52b59e1e-92be-4298-96f1-1e43387d21fa", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 4, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"766dfc88bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-d5ef04779b", ContainerID:"0ccb09bc4ab153201f935f58ab9d007bee2efb93db36cf577a0d1e6c021bb68d", Pod:"calico-apiserver-766dfc88bb-t7bd6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali924d356d23d", MAC:"be:e0:f2:1c:e0:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:04:50.803541 containerd[2133]: 2026-01-14 00:04:50.797 [INFO][5394] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0ccb09bc4ab153201f935f58ab9d007bee2efb93db36cf577a0d1e6c021bb68d" Namespace="calico-apiserver" Pod="calico-apiserver-766dfc88bb-t7bd6" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-calico--apiserver--766dfc88bb--t7bd6-eth0" Jan 14 00:04:50.830000 audit[5515]: NETFILTER_CFG table=filter:141 family=2 entries=57 op=nft_register_chain pid=5515 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:04:50.830000 audit[5515]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27828 a0=3 a1=ffffc420ce10 a2=0 a3=ffffb4aa8fa8 items=0 ppid=4814 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:50.830000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:04:50.837039 systemd-networkd[1723]: cali57abe578aef: Link UP Jan 14 00:04:50.838091 systemd-networkd[1723]: cali57abe578aef: Gained carrier Jan 14 00:04:50.860341 containerd[2133]: 2026-01-14 00:04:50.475 [INFO][5416] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4547.0.0--n--d5ef04779b-k8s-goldmane--666569f655--pdm6c-eth0 goldmane-666569f655- calico-system dd23407e-e7fa-43bd-b827-67d8fab88d3b 802 0 2026-01-14 00:04:22 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4547.0.0-n-d5ef04779b goldmane-666569f655-pdm6c eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali57abe578aef [] [] }} ContainerID="b86b793dc66c4348ed47ecfbf185d34e4b9a9a4387b009c42d284c6454536051" Namespace="calico-system" Pod="goldmane-666569f655-pdm6c" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-goldmane--666569f655--pdm6c-" Jan 14 00:04:50.860341 containerd[2133]: 2026-01-14 00:04:50.475 [INFO][5416] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b86b793dc66c4348ed47ecfbf185d34e4b9a9a4387b009c42d284c6454536051" Namespace="calico-system" Pod="goldmane-666569f655-pdm6c" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-goldmane--666569f655--pdm6c-eth0" Jan 14 00:04:50.860341 containerd[2133]: 2026-01-14 00:04:50.558 [INFO][5444] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b86b793dc66c4348ed47ecfbf185d34e4b9a9a4387b009c42d284c6454536051" HandleID="k8s-pod-network.b86b793dc66c4348ed47ecfbf185d34e4b9a9a4387b009c42d284c6454536051" Workload="ci--4547.0.0--n--d5ef04779b-k8s-goldmane--666569f655--pdm6c-eth0" Jan 14 00:04:50.860341 containerd[2133]: 2026-01-14 00:04:50.558 [INFO][5444] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b86b793dc66c4348ed47ecfbf185d34e4b9a9a4387b009c42d284c6454536051" HandleID="k8s-pod-network.b86b793dc66c4348ed47ecfbf185d34e4b9a9a4387b009c42d284c6454536051" Workload="ci--4547.0.0--n--d5ef04779b-k8s-goldmane--666569f655--pdm6c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-n-d5ef04779b", "pod":"goldmane-666569f655-pdm6c", "timestamp":"2026-01-14 00:04:50.558775737 +0000 UTC"}, Hostname:"ci-4547.0.0-n-d5ef04779b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:04:50.860341 containerd[2133]: 2026-01-14 00:04:50.559 [INFO][5444] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 00:04:50.860341 containerd[2133]: 2026-01-14 00:04:50.758 [INFO][5444] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 00:04:50.860341 containerd[2133]: 2026-01-14 00:04:50.758 [INFO][5444] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-n-d5ef04779b' Jan 14 00:04:50.860341 containerd[2133]: 2026-01-14 00:04:50.778 [INFO][5444] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b86b793dc66c4348ed47ecfbf185d34e4b9a9a4387b009c42d284c6454536051" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:50.860341 containerd[2133]: 2026-01-14 00:04:50.784 [INFO][5444] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:50.860341 containerd[2133]: 2026-01-14 00:04:50.800 [INFO][5444] ipam/ipam.go 511: Trying affinity for 192.168.70.0/26 host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:50.860341 containerd[2133]: 2026-01-14 00:04:50.804 [INFO][5444] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.0/26 host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:50.860341 containerd[2133]: 2026-01-14 00:04:50.808 [INFO][5444] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.0/26 host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:50.860341 containerd[2133]: 2026-01-14 00:04:50.808 [INFO][5444] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.70.0/26 handle="k8s-pod-network.b86b793dc66c4348ed47ecfbf185d34e4b9a9a4387b009c42d284c6454536051" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:50.860341 containerd[2133]: 2026-01-14 00:04:50.811 [INFO][5444] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b86b793dc66c4348ed47ecfbf185d34e4b9a9a4387b009c42d284c6454536051 Jan 14 00:04:50.860341 containerd[2133]: 2026-01-14 00:04:50.817 [INFO][5444] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.70.0/26 handle="k8s-pod-network.b86b793dc66c4348ed47ecfbf185d34e4b9a9a4387b009c42d284c6454536051" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:50.860341 containerd[2133]: 2026-01-14 00:04:50.828 [INFO][5444] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.70.8/26] block=192.168.70.0/26 handle="k8s-pod-network.b86b793dc66c4348ed47ecfbf185d34e4b9a9a4387b009c42d284c6454536051" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:50.860341 containerd[2133]: 2026-01-14 00:04:50.829 [INFO][5444] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.8/26] handle="k8s-pod-network.b86b793dc66c4348ed47ecfbf185d34e4b9a9a4387b009c42d284c6454536051" host="ci-4547.0.0-n-d5ef04779b" Jan 14 00:04:50.860341 containerd[2133]: 2026-01-14 00:04:50.829 [INFO][5444] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
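The IPAM entries above show Calico claiming per-pod addresses (192.168.70.7 and 192.168.70.8 here, 192.168.70.6 earlier) out of the host's affinity block 192.168.70.0/26, with each address then recorded on the WorkloadEndpoint as a /32 host route. A small illustrative containment check using only values taken from the log; this is a sketch, not Calico tooling referenced by the journal:

```python
import ipaddress

# Affinity block and pod addresses as reported by ipam/ipam.go above.
block = ipaddress.ip_network("192.168.70.0/26")
claimed = ["192.168.70.6", "192.168.70.7", "192.168.70.8"]

for ip in claimed:
    addr = ipaddress.ip_address(ip)
    # Every claimed address must fall inside the host's affinity block.
    assert addr in block, f"{ip} falls outside the affinity block {block}"
    # The WorkloadEndpoint spec stores the same address as a /32 IPNetworks entry.
    print(f"{addr} -> IPNetworks entry {addr}/32, claimed from {block}")
```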
Jan 14 00:04:50.860341 containerd[2133]: 2026-01-14 00:04:50.829 [INFO][5444] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.70.8/26] IPv6=[] ContainerID="b86b793dc66c4348ed47ecfbf185d34e4b9a9a4387b009c42d284c6454536051" HandleID="k8s-pod-network.b86b793dc66c4348ed47ecfbf185d34e4b9a9a4387b009c42d284c6454536051" Workload="ci--4547.0.0--n--d5ef04779b-k8s-goldmane--666569f655--pdm6c-eth0" Jan 14 00:04:50.860775 containerd[2133]: 2026-01-14 00:04:50.833 [INFO][5416] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b86b793dc66c4348ed47ecfbf185d34e4b9a9a4387b009c42d284c6454536051" Namespace="calico-system" Pod="goldmane-666569f655-pdm6c" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-goldmane--666569f655--pdm6c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--d5ef04779b-k8s-goldmane--666569f655--pdm6c-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"dd23407e-e7fa-43bd-b827-67d8fab88d3b", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 4, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-d5ef04779b", ContainerID:"", Pod:"goldmane-666569f655-pdm6c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.70.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali57abe578aef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:04:50.860775 containerd[2133]: 2026-01-14 00:04:50.833 [INFO][5416] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.8/32] ContainerID="b86b793dc66c4348ed47ecfbf185d34e4b9a9a4387b009c42d284c6454536051" Namespace="calico-system" Pod="goldmane-666569f655-pdm6c" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-goldmane--666569f655--pdm6c-eth0" Jan 14 00:04:50.860775 containerd[2133]: 2026-01-14 00:04:50.833 [INFO][5416] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali57abe578aef ContainerID="b86b793dc66c4348ed47ecfbf185d34e4b9a9a4387b009c42d284c6454536051" Namespace="calico-system" Pod="goldmane-666569f655-pdm6c" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-goldmane--666569f655--pdm6c-eth0" Jan 14 00:04:50.860775 containerd[2133]: 2026-01-14 00:04:50.839 [INFO][5416] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b86b793dc66c4348ed47ecfbf185d34e4b9a9a4387b009c42d284c6454536051" Namespace="calico-system" Pod="goldmane-666569f655-pdm6c" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-goldmane--666569f655--pdm6c-eth0" Jan 14 00:04:50.860775 containerd[2133]: 2026-01-14 00:04:50.839 [INFO][5416] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b86b793dc66c4348ed47ecfbf185d34e4b9a9a4387b009c42d284c6454536051" 
Namespace="calico-system" Pod="goldmane-666569f655-pdm6c" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-goldmane--666569f655--pdm6c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--d5ef04779b-k8s-goldmane--666569f655--pdm6c-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"dd23407e-e7fa-43bd-b827-67d8fab88d3b", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 4, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-d5ef04779b", ContainerID:"b86b793dc66c4348ed47ecfbf185d34e4b9a9a4387b009c42d284c6454536051", Pod:"goldmane-666569f655-pdm6c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.70.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali57abe578aef", MAC:"2a:de:1c:71:b4:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:04:50.860775 containerd[2133]: 2026-01-14 00:04:50.852 [INFO][5416] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b86b793dc66c4348ed47ecfbf185d34e4b9a9a4387b009c42d284c6454536051" Namespace="calico-system" Pod="goldmane-666569f655-pdm6c" WorkloadEndpoint="ci--4547.0.0--n--d5ef04779b-k8s-goldmane--666569f655--pdm6c-eth0" Jan 14 00:04:50.868311 containerd[2133]: time="2026-01-14T00:04:50.868225487Z" level=info msg="connecting to shim 0ccb09bc4ab153201f935f58ab9d007bee2efb93db36cf577a0d1e6c021bb68d" address="unix:///run/containerd/s/e2f56afa9c8f2bac4c3d9838de16578b58ab9c51a92a32d42e017a83c0d6fbb1" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:04:50.888709 containerd[2133]: time="2026-01-14T00:04:50.888592095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6zdb5,Uid:804e2c94-fdab-4f15-b317-31c3221bea29,Namespace:kube-system,Attempt:0,} returns sandbox id \"a66adb236fa1b0e8dff1a7cf5589a7da3360379d5f41a1a203175ef87c879ac8\"" Jan 14 00:04:50.887000 audit[5550]: NETFILTER_CFG table=filter:142 family=2 entries=68 op=nft_register_chain pid=5550 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:04:50.887000 audit[5550]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=32308 a0=3 a1=ffffc50e0020 a2=0 a3=ffff845ecfa8 items=0 ppid=4814 pid=5550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:50.887000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:04:50.892136 containerd[2133]: time="2026-01-14T00:04:50.891874892Z" level=info msg="CreateContainer within sandbox 
\"a66adb236fa1b0e8dff1a7cf5589a7da3360379d5f41a1a203175ef87c879ac8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 00:04:50.912463 systemd[1]: Started cri-containerd-0ccb09bc4ab153201f935f58ab9d007bee2efb93db36cf577a0d1e6c021bb68d.scope - libcontainer container 0ccb09bc4ab153201f935f58ab9d007bee2efb93db36cf577a0d1e6c021bb68d. Jan 14 00:04:50.924000 audit: BPF prog-id=265 op=LOAD Jan 14 00:04:50.925000 audit: BPF prog-id=266 op=LOAD Jan 14 00:04:50.925000 audit[5549]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5537 pid=5549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:50.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063636230396263346162313533323031663933356635386162396430 Jan 14 00:04:50.925000 audit: BPF prog-id=266 op=UNLOAD Jan 14 00:04:50.925000 audit[5549]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5537 pid=5549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:50.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063636230396263346162313533323031663933356635386162396430 Jan 14 00:04:50.926000 audit: BPF prog-id=267 op=LOAD Jan 14 00:04:50.926000 audit[5549]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5537 pid=5549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:50.926000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063636230396263346162313533323031663933356635386162396430 Jan 14 00:04:50.926000 audit: BPF prog-id=268 op=LOAD Jan 14 00:04:50.926000 audit[5549]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5537 pid=5549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:50.926000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063636230396263346162313533323031663933356635386162396430 Jan 14 00:04:50.927000 audit: BPF prog-id=268 op=UNLOAD Jan 14 00:04:50.927000 audit[5549]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5537 pid=5549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:50.927000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063636230396263346162313533323031663933356635386162396430 Jan 14 00:04:50.927000 audit: BPF prog-id=267 op=UNLOAD Jan 14 00:04:50.927000 audit[5549]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5537 pid=5549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:50.927000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063636230396263346162313533323031663933356635386162396430 Jan 14 00:04:50.928000 audit: BPF prog-id=269 op=LOAD Jan 14 00:04:50.928000 audit[5549]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5537 pid=5549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:50.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063636230396263346162313533323031663933356635386162396430 Jan 14 00:04:50.930109 containerd[2133]: time="2026-01-14T00:04:50.926755767Z" level=info msg="Container 39f163619cc97114f5994ab922d1cd8abe64857e3bf406eb20babb3323b523e8: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:04:50.955978 containerd[2133]: time="2026-01-14T00:04:50.955933986Z" level=info msg="CreateContainer within sandbox \"a66adb236fa1b0e8dff1a7cf5589a7da3360379d5f41a1a203175ef87c879ac8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"39f163619cc97114f5994ab922d1cd8abe64857e3bf406eb20babb3323b523e8\"" Jan 14 00:04:50.956783 containerd[2133]: time="2026-01-14T00:04:50.956759131Z" level=info msg="StartContainer for \"39f163619cc97114f5994ab922d1cd8abe64857e3bf406eb20babb3323b523e8\"" Jan 14 00:04:50.957869 containerd[2133]: time="2026-01-14T00:04:50.957841074Z" level=info msg="connecting to shim 39f163619cc97114f5994ab922d1cd8abe64857e3bf406eb20babb3323b523e8" address="unix:///run/containerd/s/1e08e3901b9bd6cdc77bc88206b16d8fde48b8d1db96c32c61bc43c6068fdc51" protocol=ttrpc version=3 Jan 14 00:04:50.960461 containerd[2133]: time="2026-01-14T00:04:50.960399009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-766dfc88bb-t7bd6,Uid:52b59e1e-92be-4298-96f1-1e43387d21fa,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0ccb09bc4ab153201f935f58ab9d007bee2efb93db36cf577a0d1e6c021bb68d\"" Jan 14 00:04:50.962643 containerd[2133]: time="2026-01-14T00:04:50.962573463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 00:04:50.971049 containerd[2133]: time="2026-01-14T00:04:50.970941912Z" level=info msg="connecting to shim b86b793dc66c4348ed47ecfbf185d34e4b9a9a4387b009c42d284c6454536051" address="unix:///run/containerd/s/dcd62d94709526cc31f2df3e3df0a0528d1557adabe167f1182daa4d4c9e45b9" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:04:50.985469 systemd[1]: Started 
cri-containerd-39f163619cc97114f5994ab922d1cd8abe64857e3bf406eb20babb3323b523e8.scope - libcontainer container 39f163619cc97114f5994ab922d1cd8abe64857e3bf406eb20babb3323b523e8. Jan 14 00:04:51.006375 systemd[1]: Started cri-containerd-b86b793dc66c4348ed47ecfbf185d34e4b9a9a4387b009c42d284c6454536051.scope - libcontainer container b86b793dc66c4348ed47ecfbf185d34e4b9a9a4387b009c42d284c6454536051. Jan 14 00:04:51.009000 audit: BPF prog-id=270 op=LOAD Jan 14 00:04:51.010000 audit: BPF prog-id=271 op=LOAD Jan 14 00:04:51.010000 audit[5581]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5470 pid=5581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:51.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339663136333631396363393731313466353939346162393232643163 Jan 14 00:04:51.010000 audit: BPF prog-id=271 op=UNLOAD Jan 14 00:04:51.010000 audit[5581]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5470 pid=5581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:51.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339663136333631396363393731313466353939346162393232643163 Jan 14 00:04:51.011000 audit: BPF prog-id=272 op=LOAD Jan 14 00:04:51.011000 audit[5581]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5470 pid=5581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:51.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339663136333631396363393731313466353939346162393232643163 Jan 14 00:04:51.011000 audit: BPF prog-id=273 op=LOAD Jan 14 00:04:51.011000 audit[5581]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5470 pid=5581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:51.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339663136333631396363393731313466353939346162393232643163 Jan 14 00:04:51.012000 audit: BPF prog-id=273 op=UNLOAD Jan 14 00:04:51.012000 audit[5581]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5470 pid=5581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
00:04:51.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339663136333631396363393731313466353939346162393232643163 Jan 14 00:04:51.012000 audit: BPF prog-id=272 op=UNLOAD Jan 14 00:04:51.012000 audit[5581]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5470 pid=5581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:51.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339663136333631396363393731313466353939346162393232643163 Jan 14 00:04:51.012000 audit: BPF prog-id=274 op=LOAD Jan 14 00:04:51.012000 audit[5581]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5470 pid=5581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:51.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339663136333631396363393731313466353939346162393232643163 Jan 14 00:04:51.025000 audit: BPF prog-id=275 op=LOAD Jan 14 00:04:51.026000 audit: BPF prog-id=276 op=LOAD Jan 14 00:04:51.026000 audit[5611]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=5594 pid=5611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:51.026000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238366237393364633636633433343865643437656366626631383564 Jan 14 00:04:51.026000 audit: BPF prog-id=276 op=UNLOAD Jan 14 00:04:51.026000 audit[5611]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5594 pid=5611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:51.026000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238366237393364633636633433343865643437656366626631383564 Jan 14 00:04:51.026000 audit: BPF prog-id=277 op=LOAD Jan 14 00:04:51.026000 audit[5611]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=5594 pid=5611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:51.026000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238366237393364633636633433343865643437656366626631383564 Jan 14 00:04:51.026000 audit: BPF prog-id=278 op=LOAD Jan 14 00:04:51.026000 audit[5611]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=5594 pid=5611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:51.026000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238366237393364633636633433343865643437656366626631383564 Jan 14 00:04:51.026000 audit: BPF prog-id=278 op=UNLOAD Jan 14 00:04:51.026000 audit[5611]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5594 pid=5611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:51.026000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238366237393364633636633433343865643437656366626631383564 Jan 14 00:04:51.026000 audit: BPF prog-id=277 op=UNLOAD Jan 14 00:04:51.026000 audit[5611]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5594 pid=5611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:51.026000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238366237393364633636633433343865643437656366626631383564 Jan 14 00:04:51.026000 audit: BPF prog-id=279 op=LOAD Jan 14 00:04:51.026000 audit[5611]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=5594 pid=5611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:51.026000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238366237393364633636633433343865643437656366626631383564 Jan 14 00:04:51.069644 containerd[2133]: time="2026-01-14T00:04:51.069381830Z" level=info msg="StartContainer for \"39f163619cc97114f5994ab922d1cd8abe64857e3bf406eb20babb3323b523e8\" returns successfully" Jan 14 00:04:51.072482 containerd[2133]: time="2026-01-14T00:04:51.072393902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pdm6c,Uid:dd23407e-e7fa-43bd-b827-67d8fab88d3b,Namespace:calico-system,Attempt:0,} returns sandbox id \"b86b793dc66c4348ed47ecfbf185d34e4b9a9a4387b009c42d284c6454536051\"" Jan 14 00:04:51.204885 
containerd[2133]: time="2026-01-14T00:04:51.204759579Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:04:51.207430 systemd-networkd[1723]: cali460d285620e: Gained IPv6LL Jan 14 00:04:51.208420 containerd[2133]: time="2026-01-14T00:04:51.208371248Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 00:04:51.208525 containerd[2133]: time="2026-01-14T00:04:51.208479650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 00:04:51.208741 kubelet[3683]: E0114 00:04:51.208648 3683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:04:51.208741 kubelet[3683]: E0114 00:04:51.208715 3683 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:04:51.210148 kubelet[3683]: E0114 00:04:51.208920 3683 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sr9bj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-766dfc88bb-t7bd6_calico-apiserver(52b59e1e-92be-4298-96f1-1e43387d21fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 00:04:51.210148 kubelet[3683]: E0114 00:04:51.210018 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766dfc88bb-t7bd6" podUID="52b59e1e-92be-4298-96f1-1e43387d21fa" Jan 14 00:04:51.210342 containerd[2133]: time="2026-01-14T00:04:51.210327921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 00:04:51.444950 containerd[2133]: time="2026-01-14T00:04:51.444740993Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:04:51.448961 containerd[2133]: time="2026-01-14T00:04:51.448911425Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 00:04:51.449208 containerd[2133]: time="2026-01-14T00:04:51.449116189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 00:04:51.449506 kubelet[3683]: E0114 00:04:51.449457 3683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 00:04:51.449692 kubelet[3683]: E0114 00:04:51.449626 3683 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 00:04:51.450031 kubelet[3683]: E0114 00:04:51.449973 3683 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xl4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pdm6c_calico-system(dd23407e-e7fa-43bd-b827-67d8fab88d3b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 00:04:51.451180 kubelet[3683]: E0114 00:04:51.451128 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pdm6c" podUID="dd23407e-e7fa-43bd-b827-67d8fab88d3b" Jan 14 00:04:51.577600 kubelet[3683]: E0114 00:04:51.577461 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766dfc88bb-t7bd6" podUID="52b59e1e-92be-4298-96f1-1e43387d21fa" Jan 14 00:04:51.580990 kubelet[3683]: E0114 00:04:51.580781 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pdm6c" podUID="dd23407e-e7fa-43bd-b827-67d8fab88d3b" Jan 14 00:04:51.584948 kubelet[3683]: E0114 00:04:51.584906 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766dfc88bb-vxthm" podUID="23297b6d-ba28-4d3e-a11a-c82aeab97bbe" Jan 14 00:04:51.615000 audit[5662]: NETFILTER_CFG table=filter:143 family=2 entries=14 op=nft_register_rule pid=5662 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:51.615000 audit[5662]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffffad0830 a2=0 a3=1 items=0 ppid=3789 pid=5662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:51.615000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:51.623000 audit[5662]: NETFILTER_CFG table=nat:144 family=2 entries=20 op=nft_register_rule pid=5662 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:51.623000 audit[5662]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffffad0830 a2=0 a3=1 items=0 ppid=3789 pid=5662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:51.623000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:51.657200 kubelet[3683]: I0114 00:04:51.656955 3683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6zdb5" podStartSLOduration=43.65693489 podStartE2EDuration="43.65693489s" podCreationTimestamp="2026-01-14 00:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:04:51.640629304 +0000 UTC m=+49.346140768" watchObservedRunningTime="2026-01-14 00:04:51.65693489 +0000 UTC m=+49.362446346" Jan 14 00:04:51.847525 systemd-networkd[1723]: 
cali651251c146d: Gained IPv6LL Jan 14 00:04:52.231455 systemd-networkd[1723]: cali57abe578aef: Gained IPv6LL Jan 14 00:04:52.551395 systemd-networkd[1723]: cali924d356d23d: Gained IPv6LL Jan 14 00:04:52.586771 kubelet[3683]: E0114 00:04:52.586704 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pdm6c" podUID="dd23407e-e7fa-43bd-b827-67d8fab88d3b" Jan 14 00:04:52.587787 kubelet[3683]: E0114 00:04:52.586903 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766dfc88bb-t7bd6" podUID="52b59e1e-92be-4298-96f1-1e43387d21fa" Jan 14 00:04:52.631000 audit[5664]: NETFILTER_CFG table=filter:145 family=2 entries=14 op=nft_register_rule pid=5664 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:52.631000 audit[5664]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff340ffb0 a2=0 a3=1 items=0 ppid=3789 pid=5664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:52.631000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:52.652000 audit[5664]: NETFILTER_CFG table=nat:146 family=2 entries=56 op=nft_register_chain pid=5664 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:04:52.652000 audit[5664]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=fffff340ffb0 a2=0 a3=1 items=0 ppid=3789 pid=5664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:04:52.652000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:04:58.383366 containerd[2133]: time="2026-01-14T00:04:58.383214437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 00:04:58.614560 containerd[2133]: time="2026-01-14T00:04:58.614362334Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:04:58.617665 containerd[2133]: time="2026-01-14T00:04:58.617624204Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 00:04:58.617914 containerd[2133]: time="2026-01-14T00:04:58.617783015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 
00:04:58.618088 kubelet[3683]: E0114 00:04:58.618039 3683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 00:04:58.618385 kubelet[3683]: E0114 00:04:58.618096 3683 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 00:04:58.618385 kubelet[3683]: E0114 00:04:58.618212 3683 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2c5ec5d7d72d44948e0cbef03a4dbb30,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wdl2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67f75d98f9-d4x2n_calico-system(ef75aa50-7d3a-4b9a-95a7-c344bbb8239a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 00:04:58.621188 containerd[2133]: time="2026-01-14T00:04:58.621139816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 00:04:58.866180 containerd[2133]: time="2026-01-14T00:04:58.866110449Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:04:58.869451 containerd[2133]: time="2026-01-14T00:04:58.869400000Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 00:04:58.869562 containerd[2133]: time="2026-01-14T00:04:58.869493570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 00:04:58.869733 kubelet[3683]: E0114 00:04:58.869692 3683 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 00:04:58.869785 kubelet[3683]: E0114 00:04:58.869745 3683 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 00:04:58.869893 kubelet[3683]: E0114 00:04:58.869863 3683 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdl2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67f75d98f9-d4x2n_calico-system(ef75aa50-7d3a-4b9a-95a7-c344bbb8239a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 00:04:58.871142 kubelet[3683]: E0114 00:04:58.871104 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67f75d98f9-d4x2n" podUID="ef75aa50-7d3a-4b9a-95a7-c344bbb8239a" Jan 14 00:05:02.384567 containerd[2133]: time="2026-01-14T00:05:02.384066603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 00:05:02.664745 containerd[2133]: time="2026-01-14T00:05:02.664545050Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:05:02.669381 containerd[2133]: time="2026-01-14T00:05:02.669270450Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 00:05:02.669381 containerd[2133]: time="2026-01-14T00:05:02.669328267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 00:05:02.669797 kubelet[3683]: E0114 00:05:02.669662 3683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 00:05:02.670751 kubelet[3683]: E0114 00:05:02.670197 3683 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 00:05:02.670751 kubelet[3683]: E0114 00:05:02.670417 3683 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hhqbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hgz55_calico-system(f0601279-098f-420b-84a8-b4028d2c0ea2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 00:05:02.671746 containerd[2133]: time="2026-01-14T00:05:02.671546852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 00:05:02.942699 containerd[2133]: time="2026-01-14T00:05:02.942542329Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:05:02.946703 containerd[2133]: time="2026-01-14T00:05:02.946580668Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 00:05:02.946703 containerd[2133]: time="2026-01-14T00:05:02.946642701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 00:05:02.946871 kubelet[3683]: E0114 00:05:02.946807 3683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 00:05:02.946871 kubelet[3683]: E0114 00:05:02.946854 3683 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 00:05:02.947220 kubelet[3683]: E0114 00:05:02.947107 3683 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ttqb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-79cf4bbcf4-jhdlg_calico-system(52895973-a9d8-41ff-890b-151c819ea908): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 00:05:02.947594 containerd[2133]: time="2026-01-14T00:05:02.947129862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 00:05:02.948354 kubelet[3683]: E0114 00:05:02.948292 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to 
resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79cf4bbcf4-jhdlg" podUID="52895973-a9d8-41ff-890b-151c819ea908" Jan 14 00:05:03.200132 containerd[2133]: time="2026-01-14T00:05:03.199783801Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:05:03.204110 containerd[2133]: time="2026-01-14T00:05:03.204004823Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 00:05:03.204110 containerd[2133]: time="2026-01-14T00:05:03.204060536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 00:05:03.204328 kubelet[3683]: E0114 00:05:03.204272 3683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 00:05:03.204364 kubelet[3683]: E0114 00:05:03.204336 3683 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 00:05:03.204478 kubelet[3683]: E0114 00:05:03.204440 3683 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hhqbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hgz55_calico-system(f0601279-098f-420b-84a8-b4028d2c0ea2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 00:05:03.205756 kubelet[3683]: E0114 00:05:03.205694 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hgz55" podUID="f0601279-098f-420b-84a8-b4028d2c0ea2" Jan 14 00:05:03.382201 containerd[2133]: time="2026-01-14T00:05:03.382086923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 00:05:03.696566 containerd[2133]: time="2026-01-14T00:05:03.696404296Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:05:03.700642 containerd[2133]: time="2026-01-14T00:05:03.700524764Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 00:05:03.700642 containerd[2133]: time="2026-01-14T00:05:03.700576421Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 00:05:03.700957 kubelet[3683]: E0114 00:05:03.700917 3683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 00:05:03.701517 kubelet[3683]: E0114 00:05:03.701277 3683 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 00:05:03.701517 kubelet[3683]: E0114 00:05:03.701428 3683 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xl4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod goldmane-666569f655-pdm6c_calico-system(dd23407e-e7fa-43bd-b827-67d8fab88d3b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 00:05:03.702986 kubelet[3683]: E0114 00:05:03.702933 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pdm6c" podUID="dd23407e-e7fa-43bd-b827-67d8fab88d3b" Jan 14 00:05:04.382009 containerd[2133]: time="2026-01-14T00:05:04.381782856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 00:05:04.647724 containerd[2133]: time="2026-01-14T00:05:04.647683016Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:05:04.651167 containerd[2133]: time="2026-01-14T00:05:04.651124391Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 00:05:04.651248 containerd[2133]: time="2026-01-14T00:05:04.651221913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 00:05:04.651945 kubelet[3683]: E0114 00:05:04.651410 3683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:05:04.651945 kubelet[3683]: E0114 00:05:04.651459 3683 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:05:04.651945 kubelet[3683]: E0114 00:05:04.651561 3683 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sr9bj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-766dfc88bb-t7bd6_calico-apiserver(52b59e1e-92be-4298-96f1-1e43387d21fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 00:05:04.652996 kubelet[3683]: E0114 00:05:04.652960 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766dfc88bb-t7bd6" podUID="52b59e1e-92be-4298-96f1-1e43387d21fa" Jan 14 00:05:05.382025 containerd[2133]: time="2026-01-14T00:05:05.381952293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 00:05:05.647611 containerd[2133]: time="2026-01-14T00:05:05.647470678Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:05:05.651678 containerd[2133]: time="2026-01-14T00:05:05.651537841Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 00:05:05.651892 containerd[2133]: time="2026-01-14T00:05:05.651667164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 00:05:05.652171 
kubelet[3683]: E0114 00:05:05.652119 3683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:05:05.652484 kubelet[3683]: E0114 00:05:05.652191 3683 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:05:05.652484 kubelet[3683]: E0114 00:05:05.652333 3683 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxtgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-766dfc88bb-vxthm_calico-apiserver(23297b6d-ba28-4d3e-a11a-c82aeab97bbe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 00:05:05.653812 kubelet[3683]: E0114 00:05:05.653771 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-766dfc88bb-vxthm" podUID="23297b6d-ba28-4d3e-a11a-c82aeab97bbe" Jan 14 00:05:13.382318 kubelet[3683]: E0114 00:05:13.382216 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67f75d98f9-d4x2n" podUID="ef75aa50-7d3a-4b9a-95a7-c344bbb8239a" Jan 14 00:05:14.386321 kubelet[3683]: E0114 00:05:14.386114 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hgz55" podUID="f0601279-098f-420b-84a8-b4028d2c0ea2" Jan 14 00:05:17.382135 kubelet[3683]: E0114 00:05:17.382084 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pdm6c" podUID="dd23407e-e7fa-43bd-b827-67d8fab88d3b" Jan 14 00:05:18.383798 kubelet[3683]: E0114 00:05:18.383753 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79cf4bbcf4-jhdlg" podUID="52895973-a9d8-41ff-890b-151c819ea908" Jan 14 00:05:19.382661 kubelet[3683]: E0114 00:05:19.382621 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766dfc88bb-t7bd6" podUID="52b59e1e-92be-4298-96f1-1e43387d21fa" Jan 14 00:05:20.382826 kubelet[3683]: E0114 00:05:20.382567 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766dfc88bb-vxthm" podUID="23297b6d-ba28-4d3e-a11a-c82aeab97bbe" Jan 14 00:05:24.390197 containerd[2133]: time="2026-01-14T00:05:24.389318319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 00:05:24.662542 containerd[2133]: time="2026-01-14T00:05:24.662346928Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:05:24.665924 containerd[2133]: time="2026-01-14T00:05:24.665883597Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 00:05:24.666118 containerd[2133]: time="2026-01-14T00:05:24.665931869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 00:05:24.668326 kubelet[3683]: E0114 00:05:24.668260 3683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 00:05:24.668326 kubelet[3683]: E0114 00:05:24.668327 3683 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 00:05:24.668644 kubelet[3683]: E0114 00:05:24.668437 3683 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2c5ec5d7d72d44948e0cbef03a4dbb30,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wdl2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67f75d98f9-d4x2n_calico-system(ef75aa50-7d3a-4b9a-95a7-c344bbb8239a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 00:05:24.672288 containerd[2133]: time="2026-01-14T00:05:24.670721202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 00:05:24.943359 containerd[2133]: time="2026-01-14T00:05:24.942862809Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:05:24.946891 containerd[2133]: time="2026-01-14T00:05:24.946840654Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 00:05:24.947024 containerd[2133]: time="2026-01-14T00:05:24.946866959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 00:05:24.948332 kubelet[3683]: E0114 00:05:24.948294 3683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 00:05:24.948407 kubelet[3683]: E0114 00:05:24.948345 3683 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 00:05:24.948469 kubelet[3683]: E0114 00:05:24.948441 3683 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdl2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67f75d98f9-d4x2n_calico-system(ef75aa50-7d3a-4b9a-95a7-c344bbb8239a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 00:05:24.949870 kubelet[3683]: E0114 00:05:24.949814 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67f75d98f9-d4x2n" podUID="ef75aa50-7d3a-4b9a-95a7-c344bbb8239a" Jan 14 00:05:28.384652 containerd[2133]: time="2026-01-14T00:05:28.383575399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 00:05:28.709953 containerd[2133]: time="2026-01-14T00:05:28.709706863Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:05:28.713526 containerd[2133]: time="2026-01-14T00:05:28.713478037Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 00:05:28.713772 
containerd[2133]: time="2026-01-14T00:05:28.713489293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 00:05:28.714057 kubelet[3683]: E0114 00:05:28.713911 3683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 00:05:28.714057 kubelet[3683]: E0114 00:05:28.713958 3683 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 00:05:28.714748 kubelet[3683]: E0114 00:05:28.714368 3683 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hhqbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hgz55_calico-system(f0601279-098f-420b-84a8-b4028d2c0ea2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 00:05:28.717502 containerd[2133]: time="2026-01-14T00:05:28.717049903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 00:05:29.157718 containerd[2133]: time="2026-01-14T00:05:29.157665937Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:05:29.162184 containerd[2133]: time="2026-01-14T00:05:29.161974105Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 00:05:29.162184 containerd[2133]: time="2026-01-14T00:05:29.162058866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 00:05:29.162330 kubelet[3683]: E0114 00:05:29.162280 3683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 00:05:29.162432 kubelet[3683]: E0114 00:05:29.162341 3683 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 00:05:29.162929 kubelet[3683]: E0114 00:05:29.162436 3683 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hhqbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hgz55_calico-system(f0601279-098f-420b-84a8-b4028d2c0ea2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 00:05:29.163871 kubelet[3683]: E0114 00:05:29.163841 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hgz55" podUID="f0601279-098f-420b-84a8-b4028d2c0ea2" Jan 14 00:05:30.383561 containerd[2133]: time="2026-01-14T00:05:30.382983792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 00:05:30.657857 containerd[2133]: time="2026-01-14T00:05:30.657666276Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:05:30.661431 containerd[2133]: time="2026-01-14T00:05:30.661378529Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 00:05:30.661690 containerd[2133]: time="2026-01-14T00:05:30.661625710Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 00:05:30.661967 kubelet[3683]: E0114 00:05:30.661922 3683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 00:05:30.662630 kubelet[3683]: E0114 00:05:30.662150 3683 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 00:05:30.662630 kubelet[3683]: E0114 00:05:30.662400 3683 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ttqb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-79cf4bbcf4-jhdlg_calico-system(52895973-a9d8-41ff-890b-151c819ea908): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 00:05:30.663830 kubelet[3683]: E0114 00:05:30.663801 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79cf4bbcf4-jhdlg" podUID="52895973-a9d8-41ff-890b-151c819ea908" Jan 14 00:05:32.385561 containerd[2133]: time="2026-01-14T00:05:32.385308072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 00:05:32.650187 containerd[2133]: 
time="2026-01-14T00:05:32.649984314Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:05:32.653474 containerd[2133]: time="2026-01-14T00:05:32.653433370Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 00:05:32.653689 containerd[2133]: time="2026-01-14T00:05:32.653460250Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 00:05:32.653865 kubelet[3683]: E0114 00:05:32.653784 3683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 00:05:32.653865 kubelet[3683]: E0114 00:05:32.653833 3683 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 00:05:32.655481 kubelet[3683]: E0114 00:05:32.653937 3683 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xl4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pdm6c_calico-system(dd23407e-e7fa-43bd-b827-67d8fab88d3b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 00:05:32.655481 kubelet[3683]: E0114 00:05:32.655187 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pdm6c" podUID="dd23407e-e7fa-43bd-b827-67d8fab88d3b" Jan 14 00:05:34.383871 containerd[2133]: time="2026-01-14T00:05:34.383832351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 00:05:34.626362 containerd[2133]: time="2026-01-14T00:05:34.626297089Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:05:34.630138 containerd[2133]: time="2026-01-14T00:05:34.630082903Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 00:05:34.630281 containerd[2133]: time="2026-01-14T00:05:34.630182816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 00:05:34.631020 kubelet[3683]: E0114 00:05:34.630971 3683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:05:34.631355 kubelet[3683]: E0114 00:05:34.631037 3683 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:05:34.631395 kubelet[3683]: E0114 00:05:34.631356 3683 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sr9bj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-766dfc88bb-t7bd6_calico-apiserver(52b59e1e-92be-4298-96f1-1e43387d21fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 00:05:34.632603 kubelet[3683]: E0114 00:05:34.632562 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766dfc88bb-t7bd6" podUID="52b59e1e-92be-4298-96f1-1e43387d21fa" Jan 14 00:05:35.383249 containerd[2133]: time="2026-01-14T00:05:35.382342674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 00:05:35.629202 containerd[2133]: time="2026-01-14T00:05:35.629121045Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:05:35.633088 containerd[2133]: time="2026-01-14T00:05:35.632967756Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 00:05:35.633088 containerd[2133]: time="2026-01-14T00:05:35.633025445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 00:05:35.633471 
kubelet[3683]: E0114 00:05:35.633204 3683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:05:35.633471 kubelet[3683]: E0114 00:05:35.633251 3683 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:05:35.633471 kubelet[3683]: E0114 00:05:35.633353 3683 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxtgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-766dfc88bb-vxthm_calico-apiserver(23297b6d-ba28-4d3e-a11a-c82aeab97bbe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 00:05:35.634763 kubelet[3683]: E0114 00:05:35.634717 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-766dfc88bb-vxthm" podUID="23297b6d-ba28-4d3e-a11a-c82aeab97bbe" Jan 14 00:05:36.384734 kubelet[3683]: E0114 00:05:36.384673 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67f75d98f9-d4x2n" podUID="ef75aa50-7d3a-4b9a-95a7-c344bbb8239a" Jan 14 00:05:40.383884 kubelet[3683]: E0114 00:05:40.383835 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hgz55" podUID="f0601279-098f-420b-84a8-b4028d2c0ea2" Jan 14 00:05:45.382260 kubelet[3683]: E0114 00:05:45.382209 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79cf4bbcf4-jhdlg" podUID="52895973-a9d8-41ff-890b-151c819ea908" Jan 14 00:05:46.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.29:22-10.200.16.10:39394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:46.940398 systemd[1]: Started sshd@7-10.200.20.29:22-10.200.16.10:39394.service - OpenSSH per-connection server daemon (10.200.16.10:39394). Jan 14 00:05:46.946228 kernel: kauditd_printk_skb: 105 callbacks suppressed Jan 14 00:05:46.946384 kernel: audit: type=1130 audit(1768349146.939:762): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.29:22-10.200.16.10:39394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:05:47.361000 audit[5759]: USER_ACCT pid=5759 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:47.379675 sshd[5759]: Accepted publickey for core from 10.200.16.10 port 39394 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:05:47.379264 sshd-session[5759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:05:47.377000 audit[5759]: CRED_ACQ pid=5759 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:47.398609 kernel: audit: type=1101 audit(1768349147.361:763): pid=5759 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:47.398700 kernel: audit: type=1103 audit(1768349147.377:764): pid=5759 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:47.398718 kubelet[3683]: E0114 00:05:47.383794 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766dfc88bb-t7bd6" podUID="52b59e1e-92be-4298-96f1-1e43387d21fa" Jan 14 00:05:47.398718 kubelet[3683]: E0114 00:05:47.384140 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pdm6c" podUID="dd23407e-e7fa-43bd-b827-67d8fab88d3b" Jan 14 00:05:47.398718 kubelet[3683]: E0114 00:05:47.384229 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766dfc88bb-vxthm" podUID="23297b6d-ba28-4d3e-a11a-c82aeab97bbe" Jan 14 00:05:47.413202 kernel: audit: type=1006 audit(1768349147.377:765): pid=5759 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 14 00:05:47.406477 systemd-logind[2105]: New session 11 of user 
core. Jan 14 00:05:47.377000 audit[5759]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdb94cc00 a2=3 a3=0 items=0 ppid=1 pid=5759 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:05:47.434268 kernel: audit: type=1300 audit(1768349147.377:765): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdb94cc00 a2=3 a3=0 items=0 ppid=1 pid=5759 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:05:47.414444 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 14 00:05:47.377000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:05:47.442171 kernel: audit: type=1327 audit(1768349147.377:765): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:05:47.433000 audit[5759]: USER_START pid=5759 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:47.466981 kernel: audit: type=1105 audit(1768349147.433:766): pid=5759 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:47.443000 audit[5763]: CRED_ACQ pid=5763 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:47.483045 kernel: audit: type=1103 audit(1768349147.443:767): pid=5763 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:47.658409 sshd[5763]: Connection closed by 10.200.16.10 port 39394 Jan 14 00:05:47.659217 sshd-session[5759]: pam_unix(sshd:session): session closed for user core Jan 14 00:05:47.659000 audit[5759]: USER_END pid=5759 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:47.665448 systemd[1]: sshd@7-10.200.20.29:22-10.200.16.10:39394.service: Deactivated successfully. Jan 14 00:05:47.670595 systemd[1]: session-11.scope: Deactivated successfully. Jan 14 00:05:47.672770 systemd-logind[2105]: Session 11 logged out. Waiting for processes to exit. Jan 14 00:05:47.674668 systemd-logind[2105]: Removed session 11. 
Jan 14 00:05:47.659000 audit[5759]: CRED_DISP pid=5759 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:47.696944 kernel: audit: type=1106 audit(1768349147.659:768): pid=5759 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:47.697086 kernel: audit: type=1104 audit(1768349147.659:769): pid=5759 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:47.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.29:22-10.200.16.10:39394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:51.382415 kubelet[3683]: E0114 00:05:51.382311 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67f75d98f9-d4x2n" podUID="ef75aa50-7d3a-4b9a-95a7-c344bbb8239a" Jan 14 00:05:52.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.29:22-10.200.16.10:41538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:52.751381 systemd[1]: Started sshd@8-10.200.20.29:22-10.200.16.10:41538.service - OpenSSH per-connection server daemon (10.200.16.10:41538). Jan 14 00:05:52.754507 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 00:05:52.754611 kernel: audit: type=1130 audit(1768349152.750:771): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.29:22-10.200.16.10:41538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:05:53.193000 audit[5781]: USER_ACCT pid=5781 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:53.196354 sshd[5781]: Accepted publickey for core from 10.200.16.10 port 41538 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:05:53.212245 sshd-session[5781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:05:53.210000 audit[5781]: CRED_ACQ pid=5781 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:53.231753 kernel: audit: type=1101 audit(1768349153.193:772): pid=5781 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:53.231876 kernel: audit: type=1103 audit(1768349153.210:773): pid=5781 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:53.242382 kernel: audit: type=1006 audit(1768349153.210:774): pid=5781 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 14 00:05:53.210000 audit[5781]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe27e2ac0 a2=3 a3=0 items=0 ppid=1 pid=5781 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:05:53.260057 kernel: audit: type=1300 audit(1768349153.210:774): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe27e2ac0 a2=3 a3=0 items=0 ppid=1 pid=5781 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:05:53.261553 systemd-logind[2105]: New session 12 of user core. Jan 14 00:05:53.210000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:05:53.268361 kernel: audit: type=1327 audit(1768349153.210:774): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:05:53.270581 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 14 00:05:53.274000 audit[5781]: USER_START pid=5781 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:53.295000 audit[5785]: CRED_ACQ pid=5785 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:53.311080 kernel: audit: type=1105 audit(1768349153.274:775): pid=5781 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:53.311218 kernel: audit: type=1103 audit(1768349153.295:776): pid=5785 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:53.522634 sshd[5785]: Connection closed by 10.200.16.10 port 41538 Jan 14 00:05:53.524736 sshd-session[5781]: pam_unix(sshd:session): session closed for user core Jan 14 00:05:53.524000 audit[5781]: USER_END pid=5781 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:53.529313 systemd[1]: sshd@8-10.200.20.29:22-10.200.16.10:41538.service: Deactivated successfully. Jan 14 00:05:53.532494 systemd[1]: session-12.scope: Deactivated successfully. Jan 14 00:05:53.546531 systemd-logind[2105]: Session 12 logged out. Waiting for processes to exit. Jan 14 00:05:53.547748 systemd-logind[2105]: Removed session 12. Jan 14 00:05:53.552190 kernel: audit: type=1106 audit(1768349153.524:777): pid=5781 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:53.552339 kernel: audit: type=1104 audit(1768349153.525:778): pid=5781 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:53.525000 audit[5781]: CRED_DISP pid=5781 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:53.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.29:22-10.200.16.10:41538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:05:55.382538 kubelet[3683]: E0114 00:05:55.382478 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hgz55" podUID="f0601279-098f-420b-84a8-b4028d2c0ea2" Jan 14 00:05:58.382907 kubelet[3683]: E0114 00:05:58.382554 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pdm6c" podUID="dd23407e-e7fa-43bd-b827-67d8fab88d3b" Jan 14 00:05:58.619201 systemd[1]: Started sshd@9-10.200.20.29:22-10.200.16.10:41550.service - OpenSSH per-connection server daemon (10.200.16.10:41550). Jan 14 00:05:58.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.29:22-10.200.16.10:41550 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:58.622874 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 00:05:58.622964 kernel: audit: type=1130 audit(1768349158.618:780): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.29:22-10.200.16.10:41550 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:05:59.074000 audit[5799]: USER_ACCT pid=5799 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:59.075948 sshd[5799]: Accepted publickey for core from 10.200.16.10 port 41550 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:05:59.094693 kernel: audit: type=1101 audit(1768349159.074:781): pid=5799 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:59.094827 kernel: audit: type=1103 audit(1768349159.092:782): pid=5799 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:59.092000 audit[5799]: CRED_ACQ pid=5799 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:59.095453 sshd-session[5799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:05:59.118135 systemd-logind[2105]: New session 13 of user core. Jan 14 00:05:59.119946 kernel: audit: type=1006 audit(1768349159.092:783): pid=5799 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 14 00:05:59.121275 kernel: audit: type=1300 audit(1768349159.092:783): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffff200710 a2=3 a3=0 items=0 ppid=1 pid=5799 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:05:59.092000 audit[5799]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffff200710 a2=3 a3=0 items=0 ppid=1 pid=5799 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:05:59.092000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:05:59.143539 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 14 00:05:59.144705 kernel: audit: type=1327 audit(1768349159.092:783): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:05:59.147000 audit[5799]: USER_START pid=5799 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:59.169000 audit[5803]: CRED_ACQ pid=5803 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:59.184653 kernel: audit: type=1105 audit(1768349159.147:784): pid=5799 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:59.184978 kernel: audit: type=1103 audit(1768349159.169:785): pid=5803 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:59.377293 sshd[5803]: Connection closed by 10.200.16.10 port 41550 Jan 14 00:05:59.377747 sshd-session[5799]: pam_unix(sshd:session): session closed for user core Jan 14 00:05:59.377000 audit[5799]: USER_END pid=5799 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:59.382604 systemd[1]: sshd@9-10.200.20.29:22-10.200.16.10:41550.service: Deactivated successfully. Jan 14 00:05:59.388830 systemd[1]: session-13.scope: Deactivated successfully. Jan 14 00:05:59.391347 systemd-logind[2105]: Session 13 logged out. Waiting for processes to exit. Jan 14 00:05:59.396265 systemd-logind[2105]: Removed session 13. 
Jan 14 00:05:59.377000 audit[5799]: CRED_DISP pid=5799 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:59.399919 kubelet[3683]: E0114 00:05:59.399569 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766dfc88bb-vxthm" podUID="23297b6d-ba28-4d3e-a11a-c82aeab97bbe" Jan 14 00:05:59.414190 kernel: audit: type=1106 audit(1768349159.377:786): pid=5799 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:59.414315 kernel: audit: type=1104 audit(1768349159.377:787): pid=5799 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:59.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.29:22-10.200.16.10:41550 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:05:59.459949 systemd[1]: Started sshd@10-10.200.20.29:22-10.200.16.10:41562.service - OpenSSH per-connection server daemon (10.200.16.10:41562). Jan 14 00:05:59.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.29:22-10.200.16.10:41562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:05:59.859191 sshd[5815]: Accepted publickey for core from 10.200.16.10 port 41562 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:05:59.857000 audit[5815]: USER_ACCT pid=5815 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:59.860000 audit[5815]: CRED_ACQ pid=5815 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:59.860000 audit[5815]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffeee102b0 a2=3 a3=0 items=0 ppid=1 pid=5815 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:05:59.860000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:05:59.862814 sshd-session[5815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:05:59.870027 systemd-logind[2105]: New session 14 of user core. Jan 14 00:05:59.875544 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 14 00:05:59.877000 audit[5815]: USER_START pid=5815 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:05:59.880000 audit[5819]: CRED_ACQ pid=5819 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:00.197864 sshd[5819]: Connection closed by 10.200.16.10 port 41562 Jan 14 00:06:00.200505 sshd-session[5815]: pam_unix(sshd:session): session closed for user core Jan 14 00:06:00.201000 audit[5815]: USER_END pid=5815 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:00.201000 audit[5815]: CRED_DISP pid=5815 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:00.206413 systemd-logind[2105]: Session 14 logged out. Waiting for processes to exit. Jan 14 00:06:00.206958 systemd[1]: sshd@10-10.200.20.29:22-10.200.16.10:41562.service: Deactivated successfully. Jan 14 00:06:00.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.29:22-10.200.16.10:41562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:00.210620 systemd[1]: session-14.scope: Deactivated successfully. 
Jan 14 00:06:00.212772 systemd-logind[2105]: Removed session 14. Jan 14 00:06:00.289727 systemd[1]: Started sshd@11-10.200.20.29:22-10.200.16.10:60812.service - OpenSSH per-connection server daemon (10.200.16.10:60812). Jan 14 00:06:00.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.29:22-10.200.16.10:60812 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:00.382464 kubelet[3683]: E0114 00:06:00.382419 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79cf4bbcf4-jhdlg" podUID="52895973-a9d8-41ff-890b-151c819ea908" Jan 14 00:06:00.709000 audit[5829]: USER_ACCT pid=5829 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:00.710798 sshd[5829]: Accepted publickey for core from 10.200.16.10 port 60812 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:06:00.710000 audit[5829]: CRED_ACQ pid=5829 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:00.710000 audit[5829]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe55fe170 a2=3 a3=0 items=0 ppid=1 pid=5829 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:00.710000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:06:00.712500 sshd-session[5829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:06:00.716948 systemd-logind[2105]: New session 15 of user core. Jan 14 00:06:00.721338 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 14 00:06:00.723000 audit[5829]: USER_START pid=5829 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:00.724000 audit[5833]: CRED_ACQ pid=5833 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:00.991096 sshd[5833]: Connection closed by 10.200.16.10 port 60812 Jan 14 00:06:00.991817 sshd-session[5829]: pam_unix(sshd:session): session closed for user core Jan 14 00:06:00.993000 audit[5829]: USER_END pid=5829 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:00.994000 audit[5829]: CRED_DISP pid=5829 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:00.997977 systemd[1]: sshd@11-10.200.20.29:22-10.200.16.10:60812.service: Deactivated successfully. Jan 14 00:06:00.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.29:22-10.200.16.10:60812 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:01.000591 systemd[1]: session-15.scope: Deactivated successfully. Jan 14 00:06:01.003504 systemd-logind[2105]: Session 15 logged out. Waiting for processes to exit. Jan 14 00:06:01.005549 systemd-logind[2105]: Removed session 15. 
Jan 14 00:06:02.383191 kubelet[3683]: E0114 00:06:02.382977 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766dfc88bb-t7bd6" podUID="52b59e1e-92be-4298-96f1-1e43387d21fa" Jan 14 00:06:05.383638 containerd[2133]: time="2026-01-14T00:06:05.382829494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 00:06:05.639082 containerd[2133]: time="2026-01-14T00:06:05.638820308Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:06:05.642331 containerd[2133]: time="2026-01-14T00:06:05.642223880Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 00:06:05.642331 containerd[2133]: time="2026-01-14T00:06:05.642292410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 00:06:05.642485 kubelet[3683]: E0114 00:06:05.642447 3683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 00:06:05.643418 kubelet[3683]: E0114 00:06:05.642495 3683 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 00:06:05.643418 kubelet[3683]: E0114 00:06:05.642591 3683 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2c5ec5d7d72d44948e0cbef03a4dbb30,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wdl2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67f75d98f9-d4x2n_calico-system(ef75aa50-7d3a-4b9a-95a7-c344bbb8239a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 00:06:05.645472 containerd[2133]: time="2026-01-14T00:06:05.645417801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 00:06:05.927806 containerd[2133]: time="2026-01-14T00:06:05.927663391Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:06:05.932809 containerd[2133]: time="2026-01-14T00:06:05.932702216Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 00:06:05.932809 containerd[2133]: time="2026-01-14T00:06:05.932763585Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 00:06:05.933305 kubelet[3683]: E0114 00:06:05.933093 3683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 00:06:05.933305 kubelet[3683]: E0114 00:06:05.933142 3683 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 00:06:05.934447 kubelet[3683]: E0114 00:06:05.934396 3683 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdl2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67f75d98f9-d4x2n_calico-system(ef75aa50-7d3a-4b9a-95a7-c344bbb8239a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 00:06:05.935668 kubelet[3683]: E0114 00:06:05.935616 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67f75d98f9-d4x2n" podUID="ef75aa50-7d3a-4b9a-95a7-c344bbb8239a" Jan 14 00:06:06.085335 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 14 00:06:06.085520 kernel: audit: type=1130 audit(1768349166.079:807): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.29:22-10.200.16.10:60822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:06.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.29:22-10.200.16.10:60822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:06:06.080629 systemd[1]: Started sshd@12-10.200.20.29:22-10.200.16.10:60822.service - OpenSSH per-connection server daemon (10.200.16.10:60822). Jan 14 00:06:06.516000 audit[5853]: USER_ACCT pid=5853 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:06.534848 sshd[5853]: Accepted publickey for core from 10.200.16.10 port 60822 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:06:06.536459 sshd-session[5853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:06:06.534000 audit[5853]: CRED_ACQ pid=5853 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:06.551282 kernel: audit: type=1101 audit(1768349166.516:808): pid=5853 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:06.551408 kernel: audit: type=1103 audit(1768349166.534:809): pid=5853 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:06.561811 kernel: audit: type=1006 audit(1768349166.534:810): pid=5853 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 14 00:06:06.534000 audit[5853]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffe808560 a2=3 a3=0 items=0 ppid=1 pid=5853 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:06.570223 systemd-logind[2105]: New session 16 of user core. Jan 14 00:06:06.578940 kernel: audit: type=1300 audit(1768349166.534:810): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffe808560 a2=3 a3=0 items=0 ppid=1 pid=5853 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:06.534000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:06:06.586647 kernel: audit: type=1327 audit(1768349166.534:810): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:06:06.591279 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 14 00:06:06.595000 audit[5853]: USER_START pid=5853 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:06.598000 audit[5857]: CRED_ACQ pid=5857 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:06.638117 kernel: audit: type=1105 audit(1768349166.595:811): pid=5853 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:06.638266 kernel: audit: type=1103 audit(1768349166.598:812): pid=5857 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:06.864206 sshd[5857]: Connection closed by 10.200.16.10 port 60822 Jan 14 00:06:06.864228 sshd-session[5853]: pam_unix(sshd:session): session closed for user core Jan 14 00:06:06.866000 audit[5853]: USER_END pid=5853 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:06.875790 systemd[1]: sshd@12-10.200.20.29:22-10.200.16.10:60822.service: Deactivated successfully. Jan 14 00:06:06.878406 systemd[1]: session-16.scope: Deactivated successfully. Jan 14 00:06:06.880062 systemd-logind[2105]: Session 16 logged out. Waiting for processes to exit. Jan 14 00:06:06.881477 systemd-logind[2105]: Removed session 16. Jan 14 00:06:06.871000 audit[5853]: CRED_DISP pid=5853 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:06.902441 kernel: audit: type=1106 audit(1768349166.866:813): pid=5853 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:06.902559 kernel: audit: type=1104 audit(1768349166.871:814): pid=5853 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:06.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.29:22-10.200.16.10:60822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:06:09.381613 containerd[2133]: time="2026-01-14T00:06:09.381570596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 00:06:09.692741 containerd[2133]: time="2026-01-14T00:06:09.692688014Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:06:09.696642 containerd[2133]: time="2026-01-14T00:06:09.696600240Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 00:06:09.696937 containerd[2133]: time="2026-01-14T00:06:09.696764347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 00:06:09.697185 kubelet[3683]: E0114 00:06:09.697127 3683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 00:06:09.697494 kubelet[3683]: E0114 00:06:09.697336 3683 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 00:06:09.697700 kubelet[3683]: E0114 00:06:09.697578 3683 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hhqbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hgz55_calico-system(f0601279-098f-420b-84a8-b4028d2c0ea2): ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 00:06:09.699628 containerd[2133]: time="2026-01-14T00:06:09.699606795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 00:06:10.005877 containerd[2133]: time="2026-01-14T00:06:10.005429050Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:06:10.008833 containerd[2133]: time="2026-01-14T00:06:10.008790660Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 00:06:10.008920 containerd[2133]: time="2026-01-14T00:06:10.008876237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 00:06:10.009151 kubelet[3683]: E0114 00:06:10.009102 3683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 00:06:10.009239 kubelet[3683]: E0114 00:06:10.009152 3683 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 00:06:10.009292 kubelet[3683]: E0114 00:06:10.009257 3683 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hhqbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hgz55_calico-system(f0601279-098f-420b-84a8-b4028d2c0ea2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 00:06:10.010550 kubelet[3683]: E0114 00:06:10.010511 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hgz55" podUID="f0601279-098f-420b-84a8-b4028d2c0ea2" Jan 14 00:06:10.382547 kubelet[3683]: E0114 00:06:10.382301 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766dfc88bb-vxthm" podUID="23297b6d-ba28-4d3e-a11a-c82aeab97bbe" Jan 14 00:06:11.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@13-10.200.20.29:22-10.200.16.10:45624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:11.952803 systemd[1]: Started sshd@13-10.200.20.29:22-10.200.16.10:45624.service - OpenSSH per-connection server daemon (10.200.16.10:45624). Jan 14 00:06:11.956149 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 00:06:11.956565 kernel: audit: type=1130 audit(1768349171.951:816): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.29:22-10.200.16.10:45624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:12.402000 audit[5881]: USER_ACCT pid=5881 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:12.419529 sshd[5881]: Accepted publickey for core from 10.200.16.10 port 45624 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:06:12.421619 sshd-session[5881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:06:12.419000 audit[5881]: CRED_ACQ pid=5881 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:12.436561 kernel: audit: type=1101 audit(1768349172.402:817): pid=5881 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:12.436659 kernel: audit: type=1103 audit(1768349172.419:818): pid=5881 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:12.446813 kernel: audit: type=1006 audit(1768349172.419:819): pid=5881 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 14 00:06:12.419000 audit[5881]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd815c0b0 a2=3 a3=0 items=0 ppid=1 pid=5881 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:12.464612 kernel: audit: type=1300 audit(1768349172.419:819): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd815c0b0 a2=3 a3=0 items=0 ppid=1 pid=5881 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:12.419000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:06:12.471106 systemd-logind[2105]: New session 17 of user core. Jan 14 00:06:12.472315 kernel: audit: type=1327 audit(1768349172.419:819): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:06:12.477372 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 14 00:06:12.479000 audit[5881]: USER_START pid=5881 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:12.481000 audit[5885]: CRED_ACQ pid=5885 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:12.514452 kernel: audit: type=1105 audit(1768349172.479:820): pid=5881 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:12.514556 kernel: audit: type=1103 audit(1768349172.481:821): pid=5885 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:12.697972 sshd[5885]: Connection closed by 10.200.16.10 port 45624 Jan 14 00:06:12.697783 sshd-session[5881]: pam_unix(sshd:session): session closed for user core Jan 14 00:06:12.700000 audit[5881]: USER_END pid=5881 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:12.700000 audit[5881]: CRED_DISP pid=5881 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:12.723683 systemd[1]: sshd@13-10.200.20.29:22-10.200.16.10:45624.service: Deactivated successfully. Jan 14 00:06:12.727592 systemd[1]: session-17.scope: Deactivated successfully. Jan 14 00:06:12.732812 systemd-logind[2105]: Session 17 logged out. Waiting for processes to exit. Jan 14 00:06:12.736952 systemd-logind[2105]: Removed session 17. Jan 14 00:06:12.738001 kernel: audit: type=1106 audit(1768349172.700:822): pid=5881 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:12.738066 kernel: audit: type=1104 audit(1768349172.700:823): pid=5881 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:12.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.29:22-10.200.16.10:45624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:06:13.385183 containerd[2133]: time="2026-01-14T00:06:13.385014565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 00:06:13.653686 containerd[2133]: time="2026-01-14T00:06:13.653644117Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:06:13.657258 containerd[2133]: time="2026-01-14T00:06:13.657221602Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 00:06:13.657361 containerd[2133]: time="2026-01-14T00:06:13.657245994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 00:06:13.657476 kubelet[3683]: E0114 00:06:13.657437 3683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 00:06:13.657745 kubelet[3683]: E0114 00:06:13.657487 3683 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 00:06:13.657745 kubelet[3683]: E0114 00:06:13.657690 3683 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xl4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pdm6c_calico-system(dd23407e-e7fa-43bd-b827-67d8fab88d3b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 00:06:13.658431 containerd[2133]: time="2026-01-14T00:06:13.658144690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 00:06:13.659319 kubelet[3683]: E0114 00:06:13.659290 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pdm6c" podUID="dd23407e-e7fa-43bd-b827-67d8fab88d3b" Jan 14 00:06:13.940970 containerd[2133]: time="2026-01-14T00:06:13.940492835Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:06:13.944301 containerd[2133]: time="2026-01-14T00:06:13.944253859Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 00:06:13.944350 containerd[2133]: time="2026-01-14T00:06:13.944308108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 00:06:13.944596 kubelet[3683]: E0114 00:06:13.944554 3683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 00:06:13.944631 kubelet[3683]: E0114 00:06:13.944622 3683 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 00:06:13.945070 kubelet[3683]: E0114 00:06:13.944744 3683 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ttqb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-79cf4bbcf4-jhdlg_calico-system(52895973-a9d8-41ff-890b-151c819ea908): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 00:06:13.945965 kubelet[3683]: E0114 00:06:13.945919 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79cf4bbcf4-jhdlg" podUID="52895973-a9d8-41ff-890b-151c819ea908" Jan 14 00:06:14.384119 kubelet[3683]: E0114 00:06:14.383447 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766dfc88bb-t7bd6" podUID="52b59e1e-92be-4298-96f1-1e43387d21fa" Jan 14 00:06:17.798864 systemd[1]: Started sshd@14-10.200.20.29:22-10.200.16.10:45636.service - OpenSSH per-connection server daemon (10.200.16.10:45636). Jan 14 00:06:17.818205 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 00:06:17.818308 kernel: audit: type=1130 audit(1768349177.797:825): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.29:22-10.200.16.10:45636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:17.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.29:22-10.200.16.10:45636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:18.209000 audit[5920]: USER_ACCT pid=5920 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:18.211424 sshd[5920]: Accepted publickey for core from 10.200.16.10 port 45636 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:06:18.228736 sshd-session[5920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:06:18.238249 systemd-logind[2105]: New session 18 of user core. Jan 14 00:06:18.225000 audit[5920]: CRED_ACQ pid=5920 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:18.241345 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 14 00:06:18.255632 kernel: audit: type=1101 audit(1768349178.209:826): pid=5920 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:18.255715 kernel: audit: type=1103 audit(1768349178.225:827): pid=5920 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:18.267469 kernel: audit: type=1006 audit(1768349178.225:828): pid=5920 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 14 00:06:18.225000 audit[5920]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe39dd330 a2=3 a3=0 items=0 ppid=1 pid=5920 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:18.284279 kernel: audit: type=1300 audit(1768349178.225:828): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe39dd330 a2=3 a3=0 items=0 ppid=1 pid=5920 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:18.225000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:06:18.292911 kernel: audit: type=1327 audit(1768349178.225:828): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:06:18.257000 audit[5920]: USER_START pid=5920 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:18.312357 kernel: audit: type=1105 audit(1768349178.257:829): pid=5920 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:18.257000 audit[5924]: CRED_ACQ pid=5924 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:18.327899 kernel: audit: type=1103 audit(1768349178.257:830): pid=5924 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:18.384098 kubelet[3683]: E0114 00:06:18.383794 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67f75d98f9-d4x2n" podUID="ef75aa50-7d3a-4b9a-95a7-c344bbb8239a" Jan 14 00:06:18.488124 sshd[5924]: Connection closed by 10.200.16.10 port 45636 Jan 14 00:06:18.487416 sshd-session[5920]: pam_unix(sshd:session): session closed for user core Jan 14 00:06:18.487000 audit[5920]: USER_END pid=5920 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:18.492144 systemd-logind[2105]: Session 18 logged out. Waiting for processes to exit. Jan 14 00:06:18.492730 systemd[1]: sshd@14-10.200.20.29:22-10.200.16.10:45636.service: Deactivated successfully. Jan 14 00:06:18.495960 systemd[1]: session-18.scope: Deactivated successfully. Jan 14 00:06:18.499591 systemd-logind[2105]: Removed session 18. Jan 14 00:06:18.487000 audit[5920]: CRED_DISP pid=5920 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:18.522395 kernel: audit: type=1106 audit(1768349178.487:831): pid=5920 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:18.522533 kernel: audit: type=1104 audit(1768349178.487:832): pid=5920 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:18.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.29:22-10.200.16.10:45636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:18.580182 systemd[1]: Started sshd@15-10.200.20.29:22-10.200.16.10:45650.service - OpenSSH per-connection server daemon (10.200.16.10:45650). Jan 14 00:06:18.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.29:22-10.200.16.10:45650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:06:19.002000 audit[5936]: USER_ACCT pid=5936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:19.003600 sshd[5936]: Accepted publickey for core from 10.200.16.10 port 45650 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:06:19.003000 audit[5936]: CRED_ACQ pid=5936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:19.003000 audit[5936]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff2d42080 a2=3 a3=0 items=0 ppid=1 pid=5936 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:19.003000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:06:19.005080 sshd-session[5936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:06:19.009212 systemd-logind[2105]: New session 19 of user core. Jan 14 00:06:19.014717 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 14 00:06:19.017000 audit[5936]: USER_START pid=5936 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:19.018000 audit[5940]: CRED_ACQ pid=5940 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:19.413870 sshd[5940]: Connection closed by 10.200.16.10 port 45650 Jan 14 00:06:19.414366 sshd-session[5936]: pam_unix(sshd:session): session closed for user core Jan 14 00:06:19.417000 audit[5936]: USER_END pid=5936 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:19.418000 audit[5936]: CRED_DISP pid=5936 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:19.422477 systemd[1]: sshd@15-10.200.20.29:22-10.200.16.10:45650.service: Deactivated successfully. Jan 14 00:06:19.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.29:22-10.200.16.10:45650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:19.424551 systemd[1]: session-19.scope: Deactivated successfully. Jan 14 00:06:19.427360 systemd-logind[2105]: Session 19 logged out. Waiting for processes to exit. 
Jan 14 00:06:19.429645 systemd-logind[2105]: Removed session 19. Jan 14 00:06:19.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.29:22-10.200.16.10:53576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:19.504994 systemd[1]: Started sshd@16-10.200.20.29:22-10.200.16.10:53576.service - OpenSSH per-connection server daemon (10.200.16.10:53576). Jan 14 00:06:19.928000 audit[5957]: USER_ACCT pid=5957 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:19.930202 sshd[5957]: Accepted publickey for core from 10.200.16.10 port 53576 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:06:19.930000 audit[5957]: CRED_ACQ pid=5957 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:19.930000 audit[5957]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffda39fab0 a2=3 a3=0 items=0 ppid=1 pid=5957 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:19.930000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:06:19.931881 sshd-session[5957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:06:19.936363 systemd-logind[2105]: New session 20 of user core. Jan 14 00:06:19.941323 systemd[1]: Started session-20.scope - Session 20 of User core. 
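The audit PROCTITLE records above carry the audited command line as one hex string, with NUL bytes separating the argv words. A minimal Python sketch for decoding them; the two payloads are copied verbatim from PROCTITLE records in this log (the second appears with the NETFILTER_CFG entries further down), and the helper name is ad hoc:

    def decode_proctitle(hex_payload: str) -> str:
        """Decode an audit PROCTITLE hex payload; NUL bytes separate argv words."""
        raw = bytes.fromhex(hex_payload)
        return " ".join(part.decode("utf-8", "replace") for part in raw.split(b"\x00"))

    # sshd-session records, e.g. PROCTITLE proctitle=737368...
    print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
    # -> sshd-session: core [priv]

    # iptables-restore records logged alongside the NETFILTER_CFG syscalls below
    print(decode_proctitle("69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters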
Jan 14 00:06:19.942000 audit[5957]: USER_START pid=5957 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:19.943000 audit[5961]: CRED_ACQ pid=5961 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:20.387866 kubelet[3683]: E0114 00:06:20.387498 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hgz55" podUID="f0601279-098f-420b-84a8-b4028d2c0ea2" Jan 14 00:06:20.612000 audit[5979]: NETFILTER_CFG table=filter:147 family=2 entries=26 op=nft_register_rule pid=5979 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:06:20.612000 audit[5979]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffeb4240f0 a2=0 a3=1 items=0 ppid=3789 pid=5979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:20.612000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:06:20.618000 audit[5979]: NETFILTER_CFG table=nat:148 family=2 entries=20 op=nft_register_rule pid=5979 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:06:20.618000 audit[5979]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffeb4240f0 a2=0 a3=1 items=0 ppid=3789 pid=5979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:20.618000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:06:20.676192 sshd[5961]: Connection closed by 10.200.16.10 port 53576 Jan 14 00:06:20.675270 sshd-session[5957]: pam_unix(sshd:session): session closed for user core Jan 14 00:06:20.676000 audit[5957]: USER_END pid=5957 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:20.676000 
audit[5957]: CRED_DISP pid=5957 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:20.681900 systemd-logind[2105]: Session 20 logged out. Waiting for processes to exit. Jan 14 00:06:20.682745 systemd[1]: sshd@16-10.200.20.29:22-10.200.16.10:53576.service: Deactivated successfully. Jan 14 00:06:20.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.29:22-10.200.16.10:53576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:20.688627 systemd[1]: session-20.scope: Deactivated successfully. Jan 14 00:06:20.690872 systemd-logind[2105]: Removed session 20. Jan 14 00:06:20.698000 audit[5984]: NETFILTER_CFG table=filter:149 family=2 entries=38 op=nft_register_rule pid=5984 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:06:20.698000 audit[5984]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffd0911fd0 a2=0 a3=1 items=0 ppid=3789 pid=5984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:20.698000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:06:20.712000 audit[5984]: NETFILTER_CFG table=nat:150 family=2 entries=20 op=nft_register_rule pid=5984 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:06:20.712000 audit[5984]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd0911fd0 a2=0 a3=1 items=0 ppid=3789 pid=5984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:20.712000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:06:20.763531 systemd[1]: Started sshd@17-10.200.20.29:22-10.200.16.10:53586.service - OpenSSH per-connection server daemon (10.200.16.10:53586). Jan 14 00:06:20.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.29:22-10.200.16.10:53586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:06:21.181000 audit[5986]: USER_ACCT pid=5986 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:21.182330 sshd[5986]: Accepted publickey for core from 10.200.16.10 port 53586 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:06:21.183000 audit[5986]: CRED_ACQ pid=5986 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:21.183000 audit[5986]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe0162260 a2=3 a3=0 items=0 ppid=1 pid=5986 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:21.183000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:06:21.185382 sshd-session[5986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:06:21.190486 systemd-logind[2105]: New session 21 of user core. Jan 14 00:06:21.198504 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 14 00:06:21.202000 audit[5986]: USER_START pid=5986 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:21.204000 audit[5990]: CRED_ACQ pid=5990 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:21.574322 sshd[5990]: Connection closed by 10.200.16.10 port 53586 Jan 14 00:06:21.574898 sshd-session[5986]: pam_unix(sshd:session): session closed for user core Jan 14 00:06:21.577000 audit[5986]: USER_END pid=5986 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:21.577000 audit[5986]: CRED_DISP pid=5986 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:21.581043 systemd[1]: sshd@17-10.200.20.29:22-10.200.16.10:53586.service: Deactivated successfully. Jan 14 00:06:21.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.29:22-10.200.16.10:53586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:21.583052 systemd[1]: session-21.scope: Deactivated successfully. Jan 14 00:06:21.583862 systemd-logind[2105]: Session 21 logged out. Waiting for processes to exit. 
Jan 14 00:06:21.585859 systemd-logind[2105]: Removed session 21. Jan 14 00:06:21.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.29:22-10.200.16.10:53590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:21.666427 systemd[1]: Started sshd@18-10.200.20.29:22-10.200.16.10:53590.service - OpenSSH per-connection server daemon (10.200.16.10:53590). Jan 14 00:06:22.088000 audit[6000]: USER_ACCT pid=6000 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:22.089000 audit[6000]: CRED_ACQ pid=6000 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:22.089000 audit[6000]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd6a387c0 a2=3 a3=0 items=0 ppid=1 pid=6000 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:22.089000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:06:22.090681 sshd-session[6000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:06:22.092723 sshd[6000]: Accepted publickey for core from 10.200.16.10 port 53590 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:06:22.095319 systemd-logind[2105]: New session 22 of user core. Jan 14 00:06:22.102329 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 14 00:06:22.104000 audit[6000]: USER_START pid=6000 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:22.106000 audit[6004]: CRED_ACQ pid=6004 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:22.372944 sshd[6004]: Connection closed by 10.200.16.10 port 53590 Jan 14 00:06:22.373000 audit[6000]: USER_END pid=6000 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:22.373000 audit[6000]: CRED_DISP pid=6000 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:22.370609 sshd-session[6000]: pam_unix(sshd:session): session closed for user core Jan 14 00:06:22.376506 systemd[1]: sshd@18-10.200.20.29:22-10.200.16.10:53590.service: Deactivated successfully. 
Jan 14 00:06:22.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.29:22-10.200.16.10:53590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:22.382609 systemd[1]: session-22.scope: Deactivated successfully. Jan 14 00:06:22.384985 systemd-logind[2105]: Session 22 logged out. Waiting for processes to exit. Jan 14 00:06:22.387557 systemd-logind[2105]: Removed session 22. Jan 14 00:06:25.383277 kubelet[3683]: E0114 00:06:25.383226 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pdm6c" podUID="dd23407e-e7fa-43bd-b827-67d8fab88d3b" Jan 14 00:06:25.384852 containerd[2133]: time="2026-01-14T00:06:25.384815347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 00:06:25.677037 containerd[2133]: time="2026-01-14T00:06:25.676836531Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:06:25.680384 containerd[2133]: time="2026-01-14T00:06:25.680274383Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 00:06:25.680384 containerd[2133]: time="2026-01-14T00:06:25.680330352Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 00:06:25.680543 kubelet[3683]: E0114 00:06:25.680502 3683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:06:25.680587 kubelet[3683]: E0114 00:06:25.680552 3683 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:06:25.680696 kubelet[3683]: E0114 00:06:25.680657 3683 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxtgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-766dfc88bb-vxthm_calico-apiserver(23297b6d-ba28-4d3e-a11a-c82aeab97bbe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 00:06:25.682117 kubelet[3683]: E0114 00:06:25.682077 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766dfc88bb-vxthm" podUID="23297b6d-ba28-4d3e-a11a-c82aeab97bbe" Jan 14 00:06:26.383013 kubelet[3683]: E0114 00:06:26.382918 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79cf4bbcf4-jhdlg" podUID="52895973-a9d8-41ff-890b-151c819ea908" Jan 14 00:06:26.439000 audit[6030]: NETFILTER_CFG table=filter:151 family=2 entries=26 op=nft_register_rule pid=6030 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:06:26.442905 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 14 
00:06:26.442994 kernel: audit: type=1325 audit(1768349186.439:874): table=filter:151 family=2 entries=26 op=nft_register_rule pid=6030 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:06:26.439000 audit[6030]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffde718e90 a2=0 a3=1 items=0 ppid=3789 pid=6030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:26.471215 kernel: audit: type=1300 audit(1768349186.439:874): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffde718e90 a2=0 a3=1 items=0 ppid=3789 pid=6030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:26.439000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:06:26.480845 kernel: audit: type=1327 audit(1768349186.439:874): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:06:26.472000 audit[6030]: NETFILTER_CFG table=nat:152 family=2 entries=104 op=nft_register_chain pid=6030 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:06:26.472000 audit[6030]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffde718e90 a2=0 a3=1 items=0 ppid=3789 pid=6030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:26.511202 kernel: audit: type=1325 audit(1768349186.472:875): table=nat:152 family=2 entries=104 op=nft_register_chain pid=6030 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:06:26.511318 kernel: audit: type=1300 audit(1768349186.472:875): arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffde718e90 a2=0 a3=1 items=0 ppid=3789 pid=6030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:26.472000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:06:26.521402 kernel: audit: type=1327 audit(1768349186.472:875): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:06:27.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.29:22-10.200.16.10:53600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:27.459403 systemd[1]: Started sshd@19-10.200.20.29:22-10.200.16.10:53600.service - OpenSSH per-connection server daemon (10.200.16.10:53600). Jan 14 00:06:27.478185 kernel: audit: type=1130 audit(1768349187.459:876): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.29:22-10.200.16.10:53600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:06:27.898000 audit[6032]: USER_ACCT pid=6032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:27.915781 sshd[6032]: Accepted publickey for core from 10.200.16.10 port 53600 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:06:27.917687 sshd-session[6032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:06:27.916000 audit[6032]: CRED_ACQ pid=6032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:27.932182 kernel: audit: type=1101 audit(1768349187.898:877): pid=6032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:27.932277 kernel: audit: type=1103 audit(1768349187.916:878): pid=6032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:27.941895 kernel: audit: type=1006 audit(1768349187.916:879): pid=6032 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 14 00:06:27.916000 audit[6032]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffd7abbb0 a2=3 a3=0 items=0 ppid=1 pid=6032 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:27.916000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:06:27.946576 systemd-logind[2105]: New session 23 of user core. Jan 14 00:06:27.956350 systemd[1]: Started session-23.scope - Session 23 of User core. 
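The kernel audit lines stamp every record as audit(EPOCH.msec:serial); the epoch part is ordinary Unix time and lines up with the journal prefix once converted (the journal here appears to be rendered in UTC). A small Python sketch using record :877 from just above; the helper name is ad hoc:

    from datetime import datetime, timezone

    def audit_stamp(field: str) -> tuple[datetime, int]:
        """Turn 'audit(1768349187.898:877)' into (UTC datetime, record serial)."""
        epoch, serial = field.strip("audit()").split(":")
        return datetime.fromtimestamp(float(epoch), tz=timezone.utc), int(serial)

    ts, serial = audit_stamp("audit(1768349187.898:877)")
    print(ts.isoformat(), serial)
    # -> 2026-01-14T00:06:27.898000+00:00 877, matching the USER_ACCT entry above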
Jan 14 00:06:27.958000 audit[6032]: USER_START pid=6032 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:27.961000 audit[6036]: CRED_ACQ pid=6036 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:28.190582 sshd[6036]: Connection closed by 10.200.16.10 port 53600 Jan 14 00:06:28.189786 sshd-session[6032]: pam_unix(sshd:session): session closed for user core Jan 14 00:06:28.191000 audit[6032]: USER_END pid=6032 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:28.191000 audit[6032]: CRED_DISP pid=6032 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:28.194504 systemd[1]: sshd@19-10.200.20.29:22-10.200.16.10:53600.service: Deactivated successfully. Jan 14 00:06:28.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.29:22-10.200.16.10:53600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:28.196597 systemd[1]: session-23.scope: Deactivated successfully. Jan 14 00:06:28.198070 systemd-logind[2105]: Session 23 logged out. Waiting for processes to exit. Jan 14 00:06:28.198876 systemd-logind[2105]: Removed session 23. 
Jan 14 00:06:28.383150 containerd[2133]: time="2026-01-14T00:06:28.382891784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 00:06:28.620628 containerd[2133]: time="2026-01-14T00:06:28.620320991Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:06:28.623599 containerd[2133]: time="2026-01-14T00:06:28.623550270Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 00:06:28.623696 containerd[2133]: time="2026-01-14T00:06:28.623635304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 00:06:28.623904 kubelet[3683]: E0114 00:06:28.623851 3683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:06:28.624203 kubelet[3683]: E0114 00:06:28.623910 3683 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:06:28.624264 kubelet[3683]: E0114 00:06:28.624036 3683 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sr9bj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-766dfc88bb-t7bd6_calico-apiserver(52b59e1e-92be-4298-96f1-1e43387d21fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 00:06:28.625386 kubelet[3683]: E0114 00:06:28.625344 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766dfc88bb-t7bd6" podUID="52b59e1e-92be-4298-96f1-1e43387d21fa" Jan 14 00:06:30.384453 kubelet[3683]: E0114 00:06:30.384398 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67f75d98f9-d4x2n" podUID="ef75aa50-7d3a-4b9a-95a7-c344bbb8239a" Jan 14 00:06:33.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.29:22-10.200.16.10:33694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:33.277119 systemd[1]: Started sshd@20-10.200.20.29:22-10.200.16.10:33694.service - OpenSSH per-connection server daemon (10.200.16.10:33694). Jan 14 00:06:33.280887 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 14 00:06:33.280938 kernel: audit: type=1130 audit(1768349193.276:885): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.29:22-10.200.16.10:33694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:06:33.383075 kubelet[3683]: E0114 00:06:33.383025 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hgz55" podUID="f0601279-098f-420b-84a8-b4028d2c0ea2" Jan 14 00:06:33.712000 audit[6048]: USER_ACCT pid=6048 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:33.732198 kernel: audit: type=1101 audit(1768349193.712:886): pid=6048 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:33.732303 sshd[6048]: Accepted publickey for core from 10.200.16.10 port 33694 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:06:33.731000 audit[6048]: CRED_ACQ pid=6048 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:33.738969 sshd-session[6048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:06:33.758679 kernel: audit: type=1103 audit(1768349193.731:887): pid=6048 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:33.758784 kernel: audit: type=1006 audit(1768349193.737:888): pid=6048 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 14 00:06:33.737000 audit[6048]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe356c850 a2=3 a3=0 items=0 ppid=1 pid=6048 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:33.767481 systemd-logind[2105]: New session 24 of user core. 
Jan 14 00:06:33.777258 kernel: audit: type=1300 audit(1768349193.737:888): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe356c850 a2=3 a3=0 items=0 ppid=1 pid=6048 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:33.737000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:06:33.784702 kernel: audit: type=1327 audit(1768349193.737:888): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:06:33.785452 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 14 00:06:33.788000 audit[6048]: USER_START pid=6048 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:33.813000 audit[6052]: CRED_ACQ pid=6052 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:33.833932 kernel: audit: type=1105 audit(1768349193.788:889): pid=6048 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:33.834053 kernel: audit: type=1103 audit(1768349193.813:890): pid=6052 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:34.039786 sshd[6052]: Connection closed by 10.200.16.10 port 33694 Jan 14 00:06:34.040491 sshd-session[6048]: pam_unix(sshd:session): session closed for user core Jan 14 00:06:34.040000 audit[6048]: USER_END pid=6048 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:34.040000 audit[6048]: CRED_DISP pid=6048 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:34.065848 systemd[1]: sshd@20-10.200.20.29:22-10.200.16.10:33694.service: Deactivated successfully. Jan 14 00:06:34.067898 systemd[1]: session-24.scope: Deactivated successfully. 
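The numeric type= values in those kernel audit lines are the same records that appear by name (USER_ACCT, CRED_ACQ, USER_START, and so on) elsewhere in this log; the pairs can be read off wherever kauditd echoes a record that was already printed with its name. A lookup table built from the pairings visible here, plus a tiny helper whose name is illustrative:

    import re

    # Record types as they pair up with the named audit entries in this log.
    AUDIT_TYPES = {
        1006: "LOGIN",          # loginuid assigned: old-auid=4294967295 -> auid=500
        1101: "USER_ACCT",      # PAM accounting
        1103: "CRED_ACQ",       # PAM setcred (acquire)
        1104: "CRED_DISP",      # PAM setcred (dispose)
        1105: "USER_START",     # PAM session_open
        1106: "USER_END",       # PAM session_close
        1130: "SERVICE_START",  # systemd unit started
        1131: "SERVICE_STOP",   # systemd unit stopped
        1300: "SYSCALL",
        1325: "NETFILTER_CFG",
        1327: "PROCTITLE",
    }

    def record_name(kernel_line: str) -> str:
        """Map a 'kernel: audit: type=NNNN ...' line to its record name."""
        m = re.search(r"type=(\d+)", kernel_line)
        return AUDIT_TYPES.get(int(m.group(1)), "UNKNOWN") if m else "UNKNOWN"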
Jan 14 00:06:34.084128 kernel: audit: type=1106 audit(1768349194.040:891): pid=6048 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:34.084377 kernel: audit: type=1104 audit(1768349194.040:892): pid=6048 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:34.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.29:22-10.200.16.10:33694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:34.085754 systemd-logind[2105]: Session 24 logged out. Waiting for processes to exit. Jan 14 00:06:34.086763 systemd-logind[2105]: Removed session 24. Jan 14 00:06:38.383433 kubelet[3683]: E0114 00:06:38.383385 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pdm6c" podUID="dd23407e-e7fa-43bd-b827-67d8fab88d3b" Jan 14 00:06:39.133847 systemd[1]: Started sshd@21-10.200.20.29:22-10.200.16.10:33706.service - OpenSSH per-connection server daemon (10.200.16.10:33706). Jan 14 00:06:39.138099 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 00:06:39.138194 kernel: audit: type=1130 audit(1768349199.132:894): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.29:22-10.200.16.10:33706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:39.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.29:22-10.200.16.10:33706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:06:39.382092 kubelet[3683]: E0114 00:06:39.381205 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79cf4bbcf4-jhdlg" podUID="52895973-a9d8-41ff-890b-151c819ea908" Jan 14 00:06:39.382092 kubelet[3683]: E0114 00:06:39.381512 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766dfc88bb-vxthm" podUID="23297b6d-ba28-4d3e-a11a-c82aeab97bbe" Jan 14 00:06:39.577000 audit[6064]: USER_ACCT pid=6064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:39.596188 sshd[6064]: Accepted publickey for core from 10.200.16.10 port 33706 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:06:39.593000 audit[6064]: CRED_ACQ pid=6064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:39.597332 sshd-session[6064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:06:39.602394 systemd-logind[2105]: New session 25 of user core. Jan 14 00:06:39.611262 kernel: audit: type=1101 audit(1768349199.577:895): pid=6064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:39.611327 kernel: audit: type=1103 audit(1768349199.593:896): pid=6064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:39.623816 systemd[1]: Started session-25.scope - Session 25 of User core. 
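The kubelet errors keep cycling through the same set of unresolvable Calico images (apiserver, csi, node-driver-registrar, goldmane, kube-controllers, whisker, whisker-backend, all tagged v3.30.4 under ghcr.io/flatcar/calico). A quick sketch for tallying which references are failing in a saved copy of this journal; journal.txt is a placeholder file name:

    import re
    from collections import Counter

    # journal.txt is assumed to hold this log as plain text.
    with open("journal.txt", encoding="utf-8") as f:
        text = f.read()

    # Every pull failure ends in 'failed to resolve image: <ref>: not found'.
    refs = re.findall(r"failed to resolve image: (\S+?): not found", text)
    for ref, count in Counter(refs).most_common():
        print(f"{count:4d}  {ref}")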
Jan 14 00:06:39.593000 audit[6064]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc6ec0fd0 a2=3 a3=0 items=0 ppid=1 pid=6064 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:39.643593 kernel: audit: type=1006 audit(1768349199.593:897): pid=6064 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 14 00:06:39.643678 kernel: audit: type=1300 audit(1768349199.593:897): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc6ec0fd0 a2=3 a3=0 items=0 ppid=1 pid=6064 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:39.593000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:06:39.624000 audit[6064]: USER_START pid=6064 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:39.670176 kernel: audit: type=1327 audit(1768349199.593:897): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:06:39.670287 kernel: audit: type=1105 audit(1768349199.624:898): pid=6064 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:39.643000 audit[6070]: CRED_ACQ pid=6070 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:39.686128 kernel: audit: type=1103 audit(1768349199.643:899): pid=6070 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:39.855889 sshd[6070]: Connection closed by 10.200.16.10 port 33706 Jan 14 00:06:39.858355 sshd-session[6064]: pam_unix(sshd:session): session closed for user core Jan 14 00:06:39.858000 audit[6064]: USER_END pid=6064 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:39.862143 systemd[1]: sshd@21-10.200.20.29:22-10.200.16.10:33706.service: Deactivated successfully. Jan 14 00:06:39.866241 systemd[1]: session-25.scope: Deactivated successfully. Jan 14 00:06:39.869021 systemd-logind[2105]: Session 25 logged out. Waiting for processes to exit. Jan 14 00:06:39.873969 systemd-logind[2105]: Removed session 25. 
Jan 14 00:06:39.858000 audit[6064]: CRED_DISP pid=6064 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:39.902634 kernel: audit: type=1106 audit(1768349199.858:900): pid=6064 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:39.902764 kernel: audit: type=1104 audit(1768349199.858:901): pid=6064 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:39.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.29:22-10.200.16.10:33706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:42.382985 kubelet[3683]: E0114 00:06:42.382315 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766dfc88bb-t7bd6" podUID="52b59e1e-92be-4298-96f1-1e43387d21fa" Jan 14 00:06:44.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.29:22-10.200.16.10:39352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:44.928866 systemd[1]: Started sshd@22-10.200.20.29:22-10.200.16.10:39352.service - OpenSSH per-connection server daemon (10.200.16.10:39352). Jan 14 00:06:44.932795 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 00:06:44.933041 kernel: audit: type=1130 audit(1768349204.928:903): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.29:22-10.200.16.10:39352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:06:45.331000 audit[6106]: USER_ACCT pid=6106 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:45.351241 sshd[6106]: Accepted publickey for core from 10.200.16.10 port 39352 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:06:45.351149 sshd-session[6106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:06:45.349000 audit[6106]: CRED_ACQ pid=6106 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:45.352202 kernel: audit: type=1101 audit(1768349205.331:904): pid=6106 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:45.377780 kernel: audit: type=1103 audit(1768349205.349:905): pid=6106 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:45.377891 kernel: audit: type=1006 audit(1768349205.349:906): pid=6106 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 14 00:06:45.349000 audit[6106]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffea8f52c0 a2=3 a3=0 items=0 ppid=1 pid=6106 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:45.395389 kernel: audit: type=1300 audit(1768349205.349:906): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffea8f52c0 a2=3 a3=0 items=0 ppid=1 pid=6106 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:45.384523 systemd-logind[2105]: New session 26 of user core. 
Jan 14 00:06:45.395766 kubelet[3683]: E0114 00:06:45.386854 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67f75d98f9-d4x2n" podUID="ef75aa50-7d3a-4b9a-95a7-c344bbb8239a" Jan 14 00:06:45.349000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:06:45.403190 kernel: audit: type=1327 audit(1768349205.349:906): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:06:45.404442 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 14 00:06:45.407000 audit[6106]: USER_START pid=6106 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:45.427000 audit[6111]: CRED_ACQ pid=6111 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:45.442456 kernel: audit: type=1105 audit(1768349205.407:907): pid=6106 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:45.442613 kernel: audit: type=1103 audit(1768349205.427:908): pid=6111 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:45.626848 sshd[6111]: Connection closed by 10.200.16.10 port 39352 Jan 14 00:06:45.627691 sshd-session[6106]: pam_unix(sshd:session): session closed for user core Jan 14 00:06:45.628000 audit[6106]: USER_END pid=6106 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:45.634618 systemd[1]: sshd@22-10.200.20.29:22-10.200.16.10:39352.service: Deactivated successfully. Jan 14 00:06:45.637289 systemd[1]: session-26.scope: Deactivated successfully. Jan 14 00:06:45.638978 systemd-logind[2105]: Session 26 logged out. Waiting for processes to exit. Jan 14 00:06:45.640798 systemd-logind[2105]: Removed session 26. 
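The SSH sessions in this stretch are each closed within about a second of the matching 'Accepted publickey' line. A rough sketch for pairing the open and close records by client port and printing the interval; it assumes one journal entry per line as journalctl normally prints, journal.txt is again a placeholder, and the year is pinned because the journal prefix omits it:

    import re
    from datetime import datetime

    STAMP = r"^(\w{3} +\d+ \d\d:\d\d:\d\d\.\d+)"

    def ts(prefix: str) -> datetime:
        # The journal prefix has no year; pin one so the arithmetic works.
        return datetime.strptime(f"2026 {prefix}", "%Y %b %d %H:%M:%S.%f")

    opened, durations = {}, {}
    with open("journal.txt", encoding="utf-8") as f:
        for line in f:
            m = re.search(STAMP + r".*Accepted publickey for core from \S+ port (\d+)", line)
            if m:
                opened[m.group(2)] = ts(m.group(1))
                continue
            m = re.search(STAMP + r".*Connection closed by \S+ port (\d+)", line)
            if m and m.group(2) in opened:
                durations[m.group(2)] = ts(m.group(1)) - opened.pop(m.group(2))

    for port, delta in sorted(durations.items()):
        print(port, delta)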
Jan 14 00:06:45.630000 audit[6106]: CRED_DISP pid=6106 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:45.664964 kernel: audit: type=1106 audit(1768349205.628:909): pid=6106 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:45.665061 kernel: audit: type=1104 audit(1768349205.630:910): pid=6106 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:45.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.29:22-10.200.16.10:39352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:48.384670 kubelet[3683]: E0114 00:06:48.384521 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hgz55" podUID="f0601279-098f-420b-84a8-b4028d2c0ea2" Jan 14 00:06:50.382721 kubelet[3683]: E0114 00:06:50.382667 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pdm6c" podUID="dd23407e-e7fa-43bd-b827-67d8fab88d3b" Jan 14 00:06:50.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.29:22-10.200.16.10:58278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:50.728588 systemd[1]: Started sshd@23-10.200.20.29:22-10.200.16.10:58278.service - OpenSSH per-connection server daemon (10.200.16.10:58278). Jan 14 00:06:50.732017 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 00:06:50.732098 kernel: audit: type=1130 audit(1768349210.727:912): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.29:22-10.200.16.10:58278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:06:51.169000 audit[6125]: USER_ACCT pid=6125 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:51.188211 sshd[6125]: Accepted publickey for core from 10.200.16.10 port 58278 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:06:51.189765 sshd-session[6125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:06:51.187000 audit[6125]: CRED_ACQ pid=6125 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:51.204624 kernel: audit: type=1101 audit(1768349211.169:913): pid=6125 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:51.204778 kernel: audit: type=1103 audit(1768349211.187:914): pid=6125 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:51.207255 kernel: audit: type=1006 audit(1768349211.187:915): pid=6125 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 14 00:06:51.213219 systemd-logind[2105]: New session 27 of user core. Jan 14 00:06:51.187000 audit[6125]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcc182df0 a2=3 a3=0 items=0 ppid=1 pid=6125 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:51.233497 kernel: audit: type=1300 audit(1768349211.187:915): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcc182df0 a2=3 a3=0 items=0 ppid=1 pid=6125 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:51.187000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:06:51.240849 kernel: audit: type=1327 audit(1768349211.187:915): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:06:51.242405 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 14 00:06:51.246000 audit[6125]: USER_START pid=6125 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:51.267000 audit[6129]: CRED_ACQ pid=6129 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:51.281933 kernel: audit: type=1105 audit(1768349211.246:916): pid=6125 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:51.282029 kernel: audit: type=1103 audit(1768349211.267:917): pid=6129 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:51.495313 sshd[6129]: Connection closed by 10.200.16.10 port 58278 Jan 14 00:06:51.496081 sshd-session[6125]: pam_unix(sshd:session): session closed for user core Jan 14 00:06:51.498000 audit[6125]: USER_END pid=6125 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:51.521538 systemd[1]: sshd@23-10.200.20.29:22-10.200.16.10:58278.service: Deactivated successfully. Jan 14 00:06:51.523639 systemd[1]: session-27.scope: Deactivated successfully. Jan 14 00:06:51.498000 audit[6125]: CRED_DISP pid=6125 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:51.537942 kernel: audit: type=1106 audit(1768349211.498:918): pid=6125 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:51.538025 kernel: audit: type=1104 audit(1768349211.498:919): pid=6125 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:51.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.29:22-10.200.16.10:58278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:51.539946 systemd-logind[2105]: Session 27 logged out. Waiting for processes to exit. Jan 14 00:06:51.544274 systemd-logind[2105]: Removed session 27. 
Jan 14 00:06:53.381649 kubelet[3683]: E0114 00:06:53.381567 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79cf4bbcf4-jhdlg" podUID="52895973-a9d8-41ff-890b-151c819ea908" Jan 14 00:06:54.385275 kubelet[3683]: E0114 00:06:54.385231 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766dfc88bb-vxthm" podUID="23297b6d-ba28-4d3e-a11a-c82aeab97bbe" Jan 14 00:06:56.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.29:22-10.200.16.10:58280 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:56.592502 systemd[1]: Started sshd@24-10.200.20.29:22-10.200.16.10:58280.service - OpenSSH per-connection server daemon (10.200.16.10:58280). Jan 14 00:06:56.595635 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 00:06:56.595696 kernel: audit: type=1130 audit(1768349216.591:921): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.29:22-10.200.16.10:58280 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:06:57.027000 audit[6141]: USER_ACCT pid=6141 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:57.029588 sshd[6141]: Accepted publickey for core from 10.200.16.10 port 58280 ssh2: RSA SHA256:fqamrjK8C+dgRK8RXvyTmYHPFjW5aIhCYOYKXOUGuuU Jan 14 00:06:57.042000 audit[6141]: CRED_ACQ pid=6141 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:57.045193 sshd-session[6141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:06:57.059259 kernel: audit: type=1101 audit(1768349217.027:922): pid=6141 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:57.059350 kernel: audit: type=1103 audit(1768349217.042:923): pid=6141 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:57.069269 kernel: audit: type=1006 audit(1768349217.042:924): pid=6141 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jan 14 00:06:57.042000 audit[6141]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe0473f60 a2=3 a3=0 items=0 ppid=1 pid=6141 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:57.086789 kernel: audit: type=1300 audit(1768349217.042:924): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe0473f60 a2=3 a3=0 items=0 ppid=1 pid=6141 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:06:57.088977 systemd-logind[2105]: New session 28 of user core. Jan 14 00:06:57.042000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:06:57.096540 kernel: audit: type=1327 audit(1768349217.042:924): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:06:57.102333 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 14 00:06:57.105000 audit[6141]: USER_START pid=6141 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:57.111000 audit[6145]: CRED_ACQ pid=6145 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:57.139924 kernel: audit: type=1105 audit(1768349217.105:925): pid=6141 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:57.140055 kernel: audit: type=1103 audit(1768349217.111:926): pid=6145 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:57.313215 sshd[6145]: Connection closed by 10.200.16.10 port 58280 Jan 14 00:06:57.314895 sshd-session[6141]: pam_unix(sshd:session): session closed for user core Jan 14 00:06:57.316000 audit[6141]: USER_END pid=6141 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:57.320643 systemd[1]: session-28.scope: Deactivated successfully. Jan 14 00:06:57.322830 systemd[1]: sshd@24-10.200.20.29:22-10.200.16.10:58280.service: Deactivated successfully. Jan 14 00:06:57.316000 audit[6141]: CRED_DISP pid=6141 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:57.338060 systemd-logind[2105]: Session 28 logged out. Waiting for processes to exit. Jan 14 00:06:57.352026 kernel: audit: type=1106 audit(1768349217.316:927): pid=6141 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:57.352126 kernel: audit: type=1104 audit(1768349217.316:928): pid=6141 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 00:06:57.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.29:22-10.200.16.10:58280 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:06:57.353046 systemd-logind[2105]: Removed session 28. 
Jan 14 00:06:57.381975 kubelet[3683]: E0114 00:06:57.380973 3683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766dfc88bb-t7bd6" podUID="52b59e1e-92be-4298-96f1-1e43387d21fa"