Dec 16 12:13:20.385935 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490] Dec 16 12:13:20.385955 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Tue Dec 16 00:05:24 -00 2025 Dec 16 12:13:20.385961 kernel: KASLR enabled Dec 16 12:13:20.385965 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Dec 16 12:13:20.385970 kernel: printk: legacy bootconsole [pl11] enabled Dec 16 12:13:20.385974 kernel: efi: EFI v2.7 by EDK II Dec 16 12:13:20.385980 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89d018 RNG=0x3f979998 MEMRESERVE=0x3db7d598 Dec 16 12:13:20.385984 kernel: random: crng init done Dec 16 12:13:20.385988 kernel: secureboot: Secure boot disabled Dec 16 12:13:20.385992 kernel: ACPI: Early table checksum verification disabled Dec 16 12:13:20.385997 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL) Dec 16 12:13:20.386001 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:13:20.386005 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:13:20.386010 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628) Dec 16 12:13:20.386015 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:13:20.386020 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:13:20.386025 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:13:20.386030 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:13:20.386034 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:13:20.386039 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:13:20.386043 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Dec 16 12:13:20.386048 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:13:20.386052 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Dec 16 12:13:20.386056 kernel: ACPI: Use ACPI SPCR as default console: Yes Dec 16 12:13:20.386061 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Dec 16 12:13:20.386065 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug Dec 16 12:13:20.386069 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug Dec 16 12:13:20.386075 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Dec 16 12:13:20.386079 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Dec 16 12:13:20.386084 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Dec 16 12:13:20.386088 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Dec 16 12:13:20.386093 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Dec 16 12:13:20.386097 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Dec 16 12:13:20.386101 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Dec 16 12:13:20.386106 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Dec 16 12:13:20.386110 kernel: ACPI: SRAT: Node 0 PXM 
0 [mem 0x800000000000-0xffffffffffff] hotplug Dec 16 12:13:20.386115 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff] Dec 16 12:13:20.386119 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff] Dec 16 12:13:20.386125 kernel: Zone ranges: Dec 16 12:13:20.386129 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Dec 16 12:13:20.386135 kernel: DMA32 empty Dec 16 12:13:20.386140 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Dec 16 12:13:20.386145 kernel: Device empty Dec 16 12:13:20.386151 kernel: Movable zone start for each node Dec 16 12:13:20.386155 kernel: Early memory node ranges Dec 16 12:13:20.386160 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Dec 16 12:13:20.386165 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff] Dec 16 12:13:20.386169 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff] Dec 16 12:13:20.386174 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff] Dec 16 12:13:20.386179 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff] Dec 16 12:13:20.386183 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff] Dec 16 12:13:20.386188 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Dec 16 12:13:20.386193 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Dec 16 12:13:20.386198 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Dec 16 12:13:20.386203 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1 Dec 16 12:13:20.386207 kernel: psci: probing for conduit method from ACPI. Dec 16 12:13:20.386212 kernel: psci: PSCIv1.3 detected in firmware. Dec 16 12:13:20.386217 kernel: psci: Using standard PSCI v0.2 function IDs Dec 16 12:13:20.386221 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Dec 16 12:13:20.386226 kernel: psci: SMC Calling Convention v1.4 Dec 16 12:13:20.386230 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Dec 16 12:13:20.386235 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Dec 16 12:13:20.386240 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Dec 16 12:13:20.386245 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Dec 16 12:13:20.386250 kernel: pcpu-alloc: [0] 0 [0] 1 Dec 16 12:13:20.386255 kernel: Detected PIPT I-cache on CPU0 Dec 16 12:13:20.386260 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Dec 16 12:13:20.386264 kernel: CPU features: detected: GIC system register CPU interface Dec 16 12:13:20.386269 kernel: CPU features: detected: Spectre-v4 Dec 16 12:13:20.386274 kernel: CPU features: detected: Spectre-BHB Dec 16 12:13:20.386278 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 16 12:13:20.386283 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 16 12:13:20.386288 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Dec 16 12:13:20.386292 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 16 12:13:20.386298 kernel: alternatives: applying boot alternatives Dec 16 12:13:20.386304 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=756b815c2fd7ac2947efceb2a88878d1ea9723ec85037c2b4d1a09bd798bb749 Dec 16 12:13:20.386309 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 16 12:13:20.386313 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 16 12:13:20.386318 kernel: Fallback order for Node 0: 0 Dec 16 12:13:20.386323 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Dec 16 12:13:20.386327 kernel: Policy zone: Normal Dec 16 12:13:20.386332 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 12:13:20.386337 kernel: software IO TLB: area num 2. Dec 16 12:13:20.386341 kernel: software IO TLB: mapped [mem 0x0000000037370000-0x000000003b370000] (64MB) Dec 16 12:13:20.386346 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 16 12:13:20.386352 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 12:13:20.386357 kernel: rcu: RCU event tracing is enabled. Dec 16 12:13:20.386362 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 16 12:13:20.386367 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 12:13:20.386372 kernel: Tracing variant of Tasks RCU enabled. Dec 16 12:13:20.386376 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 16 12:13:20.386381 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 16 12:13:20.386386 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 12:13:20.386390 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Dec 16 12:13:20.386395 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 16 12:13:20.386400 kernel: GICv3: 960 SPIs implemented Dec 16 12:13:20.386405 kernel: GICv3: 0 Extended SPIs implemented Dec 16 12:13:20.386410 kernel: Root IRQ handler: gic_handle_irq Dec 16 12:13:20.386414 kernel: GICv3: GICv3 features: 16 PPIs, RSS Dec 16 12:13:20.386419 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Dec 16 12:13:20.386424 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Dec 16 12:13:20.386428 kernel: ITS: No ITS available, not enabling LPIs Dec 16 12:13:20.386433 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 16 12:13:20.386438 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Dec 16 12:13:20.386443 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 16 12:13:20.386447 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Dec 16 12:13:20.386452 kernel: Console: colour dummy device 80x25 Dec 16 12:13:20.386458 kernel: printk: legacy console [tty1] enabled Dec 16 12:13:20.386463 kernel: ACPI: Core revision 20240827 Dec 16 12:13:20.386468 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Dec 16 12:13:20.386473 kernel: pid_max: default: 32768 minimum: 301 Dec 16 12:13:20.386478 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 12:13:20.386483 kernel: landlock: Up and running. Dec 16 12:13:20.386488 kernel: SELinux: Initializing. Dec 16 12:13:20.386494 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 12:13:20.386499 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 12:13:20.386504 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1 Dec 16 12:13:20.386509 kernel: Hyper-V: Host Build 10.0.26102.1172-1-0 Dec 16 12:13:20.386518 kernel: Hyper-V: enabling crash_kexec_post_notifiers Dec 16 12:13:20.386524 kernel: rcu: Hierarchical SRCU implementation. Dec 16 12:13:20.386529 kernel: rcu: Max phase no-delay instances is 400. Dec 16 12:13:20.386534 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 16 12:13:20.386539 kernel: Remapping and enabling EFI services. Dec 16 12:13:20.386545 kernel: smp: Bringing up secondary CPUs ... Dec 16 12:13:20.386550 kernel: Detected PIPT I-cache on CPU1 Dec 16 12:13:20.386555 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Dec 16 12:13:20.386561 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Dec 16 12:13:20.386567 kernel: smp: Brought up 1 node, 2 CPUs Dec 16 12:13:20.386572 kernel: SMP: Total of 2 processors activated. 
Dec 16 12:13:20.386577 kernel: CPU: All CPU(s) started at EL1 Dec 16 12:13:20.386582 kernel: CPU features: detected: 32-bit EL0 Support Dec 16 12:13:20.386587 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Dec 16 12:13:20.386592 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 16 12:13:20.386597 kernel: CPU features: detected: Common not Private translations Dec 16 12:13:20.386603 kernel: CPU features: detected: CRC32 instructions Dec 16 12:13:20.386609 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Dec 16 12:13:20.386614 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 16 12:13:20.386619 kernel: CPU features: detected: LSE atomic instructions Dec 16 12:13:20.386624 kernel: CPU features: detected: Privileged Access Never Dec 16 12:13:20.386629 kernel: CPU features: detected: Speculation barrier (SB) Dec 16 12:13:20.386634 kernel: CPU features: detected: TLB range maintenance instructions Dec 16 12:13:20.386640 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Dec 16 12:13:20.386646 kernel: CPU features: detected: Scalable Vector Extension Dec 16 12:13:20.386651 kernel: alternatives: applying system-wide alternatives Dec 16 12:13:20.386656 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Dec 16 12:13:20.386661 kernel: SVE: maximum available vector length 16 bytes per vector Dec 16 12:13:20.386666 kernel: SVE: default vector length 16 bytes per vector Dec 16 12:13:20.386672 kernel: Memory: 3979900K/4194160K available (11200K kernel code, 2456K rwdata, 9084K rodata, 12480K init, 1038K bss, 193072K reserved, 16384K cma-reserved) Dec 16 12:13:20.386678 kernel: devtmpfs: initialized Dec 16 12:13:20.386683 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 12:13:20.386688 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 16 12:13:20.386694 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 16 12:13:20.386699 kernel: 0 pages in range for non-PLT usage Dec 16 12:13:20.386704 kernel: 515168 pages in range for PLT usage Dec 16 12:13:20.386709 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 12:13:20.386715 kernel: SMBIOS 3.1.0 present. Dec 16 12:13:20.386720 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025 Dec 16 12:13:20.386725 kernel: DMI: Memory slots populated: 2/2 Dec 16 12:13:20.386730 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 12:13:20.386735 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 16 12:13:20.386741 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 16 12:13:20.386746 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 16 12:13:20.386766 kernel: audit: initializing netlink subsys (disabled) Dec 16 12:13:20.386772 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1 Dec 16 12:13:20.386777 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 12:13:20.386782 kernel: cpuidle: using governor menu Dec 16 12:13:20.386787 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Dec 16 12:13:20.386792 kernel: ASID allocator initialised with 32768 entries Dec 16 12:13:20.386797 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 16 12:13:20.386803 kernel: Serial: AMBA PL011 UART driver Dec 16 12:13:20.386809 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 16 12:13:20.386814 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 16 12:13:20.386819 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 16 12:13:20.386824 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 16 12:13:20.386829 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 12:13:20.386834 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 12:13:20.386840 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 16 12:13:20.386845 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 16 12:13:20.386851 kernel: ACPI: Added _OSI(Module Device) Dec 16 12:13:20.386856 kernel: ACPI: Added _OSI(Processor Device) Dec 16 12:13:20.386861 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 12:13:20.386866 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 16 12:13:20.386871 kernel: ACPI: Interpreter enabled Dec 16 12:13:20.386876 kernel: ACPI: Using GIC for interrupt routing Dec 16 12:13:20.386883 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Dec 16 12:13:20.386888 kernel: printk: legacy console [ttyAMA0] enabled Dec 16 12:13:20.386893 kernel: printk: legacy bootconsole [pl11] disabled Dec 16 12:13:20.386898 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Dec 16 12:13:20.386903 kernel: ACPI: CPU0 has been hot-added Dec 16 12:13:20.386908 kernel: ACPI: CPU1 has been hot-added Dec 16 12:13:20.386913 kernel: iommu: Default domain type: Translated Dec 16 12:13:20.386920 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 16 12:13:20.386925 kernel: efivars: Registered efivars operations Dec 16 12:13:20.386930 kernel: vgaarb: loaded Dec 16 12:13:20.386935 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 16 12:13:20.386940 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 12:13:20.386946 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 12:13:20.386951 kernel: pnp: PnP ACPI init Dec 16 12:13:20.386957 kernel: pnp: PnP ACPI: found 0 devices Dec 16 12:13:20.386962 kernel: NET: Registered PF_INET protocol family Dec 16 12:13:20.386967 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 16 12:13:20.386972 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 16 12:13:20.386978 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 12:13:20.386983 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 16 12:13:20.386988 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 16 12:13:20.386994 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 16 12:13:20.386999 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 12:13:20.387005 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 12:13:20.387010 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 12:13:20.387015 kernel: PCI: CLS 0 bytes, default 64 Dec 16 12:13:20.387020 kernel: kvm [1]: HYP mode not available Dec 
16 12:13:20.387025 kernel: Initialise system trusted keyrings Dec 16 12:13:20.387030 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 16 12:13:20.387037 kernel: Key type asymmetric registered Dec 16 12:13:20.387042 kernel: Asymmetric key parser 'x509' registered Dec 16 12:13:20.387047 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 16 12:13:20.387052 kernel: io scheduler mq-deadline registered Dec 16 12:13:20.387057 kernel: io scheduler kyber registered Dec 16 12:13:20.387062 kernel: io scheduler bfq registered Dec 16 12:13:20.387067 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 12:13:20.387074 kernel: thunder_xcv, ver 1.0 Dec 16 12:13:20.387079 kernel: thunder_bgx, ver 1.0 Dec 16 12:13:20.387084 kernel: nicpf, ver 1.0 Dec 16 12:13:20.387089 kernel: nicvf, ver 1.0 Dec 16 12:13:20.387232 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 16 12:13:20.387301 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-16T12:13:17 UTC (1765887197) Dec 16 12:13:20.387310 kernel: efifb: probing for efifb Dec 16 12:13:20.387315 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Dec 16 12:13:20.387320 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Dec 16 12:13:20.387326 kernel: efifb: scrolling: redraw Dec 16 12:13:20.387331 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 16 12:13:20.387336 kernel: Console: switching to colour frame buffer device 128x48 Dec 16 12:13:20.387341 kernel: fb0: EFI VGA frame buffer device Dec 16 12:13:20.387348 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Dec 16 12:13:20.387353 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 16 12:13:20.387359 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Dec 16 12:13:20.387364 kernel: NET: Registered PF_INET6 protocol family Dec 16 12:13:20.387369 kernel: watchdog: NMI not fully supported Dec 16 12:13:20.387374 kernel: watchdog: Hard watchdog permanently disabled Dec 16 12:13:20.387379 kernel: Segment Routing with IPv6 Dec 16 12:13:20.387385 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 12:13:20.387391 kernel: NET: Registered PF_PACKET protocol family Dec 16 12:13:20.387396 kernel: Key type dns_resolver registered Dec 16 12:13:20.387401 kernel: registered taskstats version 1 Dec 16 12:13:20.387406 kernel: Loading compiled-in X.509 certificates Dec 16 12:13:20.387411 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 545838337a91b65b763486e536766b3eec3ef99d' Dec 16 12:13:20.387417 kernel: Demotion targets for Node 0: null Dec 16 12:13:20.387423 kernel: Key type .fscrypt registered Dec 16 12:13:20.387428 kernel: Key type fscrypt-provisioning registered Dec 16 12:13:20.387433 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 16 12:13:20.387438 kernel: ima: Allocated hash algorithm: sha1 Dec 16 12:13:20.387443 kernel: ima: No architecture policies found Dec 16 12:13:20.387449 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 16 12:13:20.387454 kernel: clk: Disabling unused clocks Dec 16 12:13:20.387459 kernel: PM: genpd: Disabling unused power domains Dec 16 12:13:20.387465 kernel: Freeing unused kernel memory: 12480K Dec 16 12:13:20.387470 kernel: Run /init as init process Dec 16 12:13:20.387475 kernel: with arguments: Dec 16 12:13:20.387480 kernel: /init Dec 16 12:13:20.387485 kernel: with environment: Dec 16 12:13:20.387490 kernel: HOME=/ Dec 16 12:13:20.387495 kernel: TERM=linux Dec 16 12:13:20.387501 kernel: hv_vmbus: Vmbus version:5.3 Dec 16 12:13:20.387506 kernel: SCSI subsystem initialized Dec 16 12:13:20.387511 kernel: hv_vmbus: registering driver hid_hyperv Dec 16 12:13:20.387517 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Dec 16 12:13:20.387603 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 16 12:13:20.387611 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 16 12:13:20.387618 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Dec 16 12:13:20.387623 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 16 12:13:20.387628 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 16 12:13:20.387634 kernel: PTP clock support registered Dec 16 12:13:20.387639 kernel: hv_utils: Registering HyperV Utility Driver Dec 16 12:13:20.387644 kernel: hv_vmbus: registering driver hv_utils Dec 16 12:13:20.387649 kernel: hv_utils: Heartbeat IC version 3.0 Dec 16 12:13:20.387656 kernel: hv_utils: Shutdown IC version 3.2 Dec 16 12:13:20.387661 kernel: hv_utils: TimeSync IC version 4.0 Dec 16 12:13:20.387666 kernel: hv_vmbus: registering driver hv_storvsc Dec 16 12:13:20.387775 kernel: scsi host0: storvsc_host_t Dec 16 12:13:20.387855 kernel: scsi host1: storvsc_host_t Dec 16 12:13:20.387943 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Dec 16 12:13:20.388027 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Dec 16 12:13:20.388101 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 16 12:13:20.388175 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Dec 16 12:13:20.388248 kernel: sd 1:0:0:0: [sda] Write Protect is off Dec 16 12:13:20.388321 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 16 12:13:20.388394 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Dec 16 12:13:20.388475 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#189 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Dec 16 12:13:20.388543 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Dec 16 12:13:20.388550 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 12:13:20.388622 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Dec 16 12:13:20.388695 kernel: sr 1:0:0:2: [sr0] scsi-1 drive Dec 16 12:13:20.388703 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 16 12:13:20.388782 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0 Dec 16 12:13:20.388788 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 16 12:13:20.388794 kernel: device-mapper: uevent: version 1.0.3 Dec 16 12:13:20.388799 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 12:13:20.388805 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 16 12:13:20.388810 kernel: raid6: neonx8 gen() 18556 MB/s Dec 16 12:13:20.388816 kernel: raid6: neonx4 gen() 18563 MB/s Dec 16 12:13:20.388822 kernel: raid6: neonx2 gen() 17091 MB/s Dec 16 12:13:20.388827 kernel: raid6: neonx1 gen() 15121 MB/s Dec 16 12:13:20.388832 kernel: raid6: int64x8 gen() 10552 MB/s Dec 16 12:13:20.388837 kernel: raid6: int64x4 gen() 10614 MB/s Dec 16 12:13:20.388842 kernel: raid6: int64x2 gen() 8970 MB/s Dec 16 12:13:20.388848 kernel: raid6: int64x1 gen() 7004 MB/s Dec 16 12:13:20.388853 kernel: raid6: using algorithm neonx4 gen() 18563 MB/s Dec 16 12:13:20.388859 kernel: raid6: .... xor() 15134 MB/s, rmw enabled Dec 16 12:13:20.388865 kernel: raid6: using neon recovery algorithm Dec 16 12:13:20.388870 kernel: xor: measuring software checksum speed Dec 16 12:13:20.388875 kernel: 8regs : 28615 MB/sec Dec 16 12:13:20.388880 kernel: 32regs : 28745 MB/sec Dec 16 12:13:20.388885 kernel: arm64_neon : 37340 MB/sec Dec 16 12:13:20.388891 kernel: xor: using function: arm64_neon (37340 MB/sec) Dec 16 12:13:20.388897 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 12:13:20.388902 kernel: BTRFS: device fsid d00a2bc5-1c68-4957-aa37-d070193fcf05 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (388) Dec 16 12:13:20.388908 kernel: BTRFS info (device dm-0): first mount of filesystem d00a2bc5-1c68-4957-aa37-d070193fcf05 Dec 16 12:13:20.388913 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:13:20.388918 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 12:13:20.388924 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 12:13:20.388929 kernel: loop: module loaded Dec 16 12:13:20.388935 kernel: loop0: detected capacity change from 0 to 91832 Dec 16 12:13:20.388940 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 12:13:20.388947 systemd[1]: Successfully made /usr/ read-only. Dec 16 12:13:20.388954 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:13:20.388961 systemd[1]: Detected virtualization microsoft. Dec 16 12:13:20.388966 systemd[1]: Detected architecture arm64. Dec 16 12:13:20.388973 systemd[1]: Running in initrd. Dec 16 12:13:20.388978 systemd[1]: No hostname configured, using default hostname. Dec 16 12:13:20.388984 systemd[1]: Hostname set to . Dec 16 12:13:20.388990 systemd[1]: Initializing machine ID from random generator. Dec 16 12:13:20.388995 systemd[1]: Queued start job for default target initrd.target. Dec 16 12:13:20.389001 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 12:13:20.389007 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:13:20.389013 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Dec 16 12:13:20.389019 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 12:13:20.389025 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:13:20.389031 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 12:13:20.389037 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 12:13:20.389044 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:13:20.389050 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:13:20.389055 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:13:20.389061 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:13:20.389066 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:13:20.389072 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:13:20.389079 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:13:20.389084 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:13:20.389090 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 12:13:20.389096 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:13:20.389101 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 12:13:20.389107 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 12:13:20.389113 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:13:20.389124 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:13:20.389131 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:13:20.389137 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:13:20.389143 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 12:13:20.389149 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 12:13:20.389156 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:13:20.389162 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 12:13:20.389168 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 12:13:20.389174 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 12:13:20.389180 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:13:20.389186 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:13:20.389193 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:13:20.389199 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 12:13:20.389205 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:13:20.389211 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 12:13:20.389232 systemd-journald[525]: Collecting audit messages is enabled. 
Dec 16 12:13:20.389247 kernel: audit: type=1130 audit(1765887200.385:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:20.389254 systemd-journald[525]: Journal started Dec 16 12:13:20.389269 systemd-journald[525]: Runtime Journal (/run/log/journal/4e881762b9dd48f4b19c4d1f05a3513b) is 8M, max 78.3M, 70.3M free. Dec 16 12:13:20.407055 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 12:13:20.407119 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 12:13:20.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:20.432114 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:13:20.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:20.449777 kernel: audit: type=1130 audit(1765887200.431:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:20.449826 kernel: Bridge firewalling registered Dec 16 12:13:20.453801 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:13:20.459615 systemd-modules-load[528]: Inserted module 'br_netfilter' Dec 16 12:13:20.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:20.461273 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:13:20.499363 kernel: audit: type=1130 audit(1765887200.478:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:20.479422 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:13:20.530204 systemd-tmpfiles[538]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 12:13:20.539269 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:13:20.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:20.567770 kernel: audit: type=1130 audit(1765887200.544:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:20.565590 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:13:20.602562 kernel: audit: type=1130 audit(1765887200.572:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:13:20.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:20.580790 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:13:20.631533 kernel: audit: type=1130 audit(1765887200.602:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:20.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:20.609359 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:13:20.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:20.634041 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 12:13:20.663601 kernel: audit: type=1130 audit(1765887200.631:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:20.668000 audit: BPF prog-id=6 op=LOAD Dec 16 12:13:20.669699 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:13:20.686989 kernel: audit: type=1334 audit(1765887200.668:9): prog-id=6 op=LOAD Dec 16 12:13:20.686928 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:13:20.717590 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:13:20.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:20.747768 kernel: audit: type=1130 audit(1765887200.723:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:20.749881 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:13:20.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:20.805974 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 12:13:20.833840 systemd-resolved[549]: Positive Trust Anchors: Dec 16 12:13:20.833852 systemd-resolved[549]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:13:20.833854 systemd-resolved[549]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 12:13:20.833874 systemd-resolved[549]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:13:20.890650 dracut-cmdline[564]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=756b815c2fd7ac2947efceb2a88878d1ea9723ec85037c2b4d1a09bd798bb749 Dec 16 12:13:20.894502 systemd-resolved[549]: Defaulting to hostname 'linux'. Dec 16 12:13:20.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:20.921646 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:13:20.927695 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:13:21.040786 kernel: Loading iSCSI transport class v2.0-870. Dec 16 12:13:21.085788 kernel: iscsi: registered transport (tcp) Dec 16 12:13:21.119241 kernel: iscsi: registered transport (qla4xxx) Dec 16 12:13:21.119268 kernel: QLogic iSCSI HBA Driver Dec 16 12:13:21.167678 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:13:21.185798 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:13:21.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:21.194499 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:13:21.249017 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 12:13:21.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:21.257767 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 12:13:21.277578 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 12:13:21.305159 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:13:21.333697 kernel: kauditd_printk_skb: 4 callbacks suppressed Dec 16 12:13:21.333725 kernel: audit: type=1130 audit(1765887201.310:15): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:13:21.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:21.341041 kernel: audit: type=1334 audit(1765887201.334:16): prog-id=7 op=LOAD Dec 16 12:13:21.334000 audit: BPF prog-id=7 op=LOAD Dec 16 12:13:21.341364 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:13:21.361647 kernel: audit: type=1334 audit(1765887201.334:17): prog-id=8 op=LOAD Dec 16 12:13:21.334000 audit: BPF prog-id=8 op=LOAD Dec 16 12:13:21.440067 systemd-udevd[781]: Using default interface naming scheme 'v257'. Dec 16 12:13:21.441489 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:13:21.461440 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:13:21.505272 kernel: audit: type=1130 audit(1765887201.460:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:21.505299 kernel: audit: type=1130 audit(1765887201.483:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:21.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:21.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:21.506947 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 12:13:21.522888 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:13:21.542850 kernel: audit: type=1334 audit(1765887201.521:20): prog-id=9 op=LOAD Dec 16 12:13:21.521000 audit: BPF prog-id=9 op=LOAD Dec 16 12:13:21.549085 dracut-pre-trigger[902]: rd.md=0: removing MD RAID activation Dec 16 12:13:21.579809 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:13:21.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:21.582100 systemd-networkd[903]: lo: Link UP Dec 16 12:13:21.630268 kernel: audit: type=1130 audit(1765887201.590:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:21.630305 kernel: audit: type=1130 audit(1765887201.611:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:21.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:21.582103 systemd-networkd[903]: lo: Gained carrier Dec 16 12:13:21.591028 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Dec 16 12:13:21.612628 systemd[1]: Reached target network.target - Network. Dec 16 12:13:21.640570 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:13:21.698425 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:13:21.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:21.731054 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 12:13:21.749625 kernel: audit: type=1130 audit(1765887201.712:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:21.836989 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:13:21.846701 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 12:13:21.837131 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:13:21.880215 kernel: hv_vmbus: registering driver hv_netvsc Dec 16 12:13:21.880245 kernel: audit: type=1131 audit(1765887201.858:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:21.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:21.859964 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:13:21.887995 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:13:21.908295 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:13:21.908413 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:13:21.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:21.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:21.920908 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:13:21.953796 kernel: hv_netvsc 0022487b-7438-0022-487b-74380022487b eth0: VF slot 1 added Dec 16 12:13:21.958485 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:13:21.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:21.971623 systemd-networkd[903]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:13:21.971635 systemd-networkd[903]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 16 12:13:22.005619 kernel: hv_vmbus: registering driver hv_pci Dec 16 12:13:22.005645 kernel: hv_pci a25a8ebc-2e2f-428e-b656-19d186910180: PCI VMBus probing: Using version 0x10004 Dec 16 12:13:21.974270 systemd-networkd[903]: eth0: Link UP Dec 16 12:13:22.017139 kernel: hv_pci a25a8ebc-2e2f-428e-b656-19d186910180: PCI host bridge to bus 2e2f:00 Dec 16 12:13:22.017327 kernel: pci_bus 2e2f:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Dec 16 12:13:21.974344 systemd-networkd[903]: eth0: Gained carrier Dec 16 12:13:22.042650 kernel: pci_bus 2e2f:00: No busn resource found for root bus, will use [bus 00-ff] Dec 16 12:13:22.042856 kernel: pci 2e2f:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Dec 16 12:13:22.042908 kernel: pci 2e2f:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 16 12:13:21.974356 systemd-networkd[903]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:13:22.060882 kernel: pci 2e2f:00:02.0: enabling Extended Tags Dec 16 12:13:22.061897 systemd-networkd[903]: eth0: DHCPv4 address 10.200.20.36/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 16 12:13:22.081807 kernel: pci 2e2f:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 2e2f:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Dec 16 12:13:22.094538 kernel: pci_bus 2e2f:00: busn_res: [bus 00-ff] end is updated to 00 Dec 16 12:13:22.094719 kernel: pci 2e2f:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Dec 16 12:13:22.269710 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Dec 16 12:13:22.289910 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 12:13:22.400145 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 16 12:13:22.429246 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Dec 16 12:13:22.449211 kernel: mlx5_core 2e2f:00:02.0: enabling device (0000 -> 0002) Dec 16 12:13:22.554570 kernel: mlx5_core 2e2f:00:02.0: PTM is not supported by PCIe Dec 16 12:13:22.554818 kernel: mlx5_core 2e2f:00:02.0: firmware version: 16.30.5006 Dec 16 12:13:22.558112 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Dec 16 12:13:22.761425 kernel: hv_netvsc 0022487b-7438-0022-487b-74380022487b eth0: VF registering: eth1 Dec 16 12:13:22.761659 kernel: mlx5_core 2e2f:00:02.0 eth1: joined to eth0 Dec 16 12:13:22.769647 kernel: mlx5_core 2e2f:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Dec 16 12:13:22.788792 kernel: mlx5_core 2e2f:00:02.0 enP11823s1: renamed from eth1 Dec 16 12:13:22.789700 systemd-networkd[903]: eth1: Interface name change detected, renamed to enP11823s1. Dec 16 12:13:22.833836 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 12:13:22.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:22.846330 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:13:22.853943 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:13:22.868185 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Dec 16 12:13:22.881915 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 12:13:22.916584 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:13:22.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:22.958774 kernel: mlx5_core 2e2f:00:02.0 enP11823s1: Link up Dec 16 12:13:22.993761 kernel: hv_netvsc 0022487b-7438-0022-487b-74380022487b eth0: Data path switched to VF: enP11823s1 Dec 16 12:13:22.993935 systemd-networkd[903]: enP11823s1: Link UP Dec 16 12:13:22.994187 systemd-networkd[903]: enP11823s1: Gained carrier Dec 16 12:13:23.259191 systemd-networkd[903]: eth0: Gained IPv6LL Dec 16 12:13:23.729350 disk-uuid[1045]: Warning: The kernel is still using the old partition table. Dec 16 12:13:23.729350 disk-uuid[1045]: The new table will be used at the next reboot or after you Dec 16 12:13:23.729350 disk-uuid[1045]: run partprobe(8) or kpartx(8) Dec 16 12:13:23.729350 disk-uuid[1045]: The operation has completed successfully. Dec 16 12:13:23.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:23.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:23.737324 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 12:13:23.737437 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 12:13:23.751519 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 12:13:23.821778 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1176) Dec 16 12:13:23.833616 kernel: BTRFS info (device sda6): first mount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 12:13:23.833645 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:13:23.860413 kernel: BTRFS info (device sda6): turning on async discard Dec 16 12:13:23.860461 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 12:13:23.871789 kernel: BTRFS info (device sda6): last unmount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 12:13:23.872177 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 12:13:23.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:23.879141 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 12:13:24.904154 ignition[1195]: Ignition 2.24.0 Dec 16 12:13:24.904171 ignition[1195]: Stage: fetch-offline Dec 16 12:13:24.909098 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:13:24.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:24.904286 ignition[1195]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:13:24.919494 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 16 12:13:24.904297 ignition[1195]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:13:24.904376 ignition[1195]: parsed url from cmdline: "" Dec 16 12:13:24.904379 ignition[1195]: no config URL provided Dec 16 12:13:24.904437 ignition[1195]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 12:13:24.904444 ignition[1195]: no config at "/usr/lib/ignition/user.ign" Dec 16 12:13:24.904455 ignition[1195]: failed to fetch config: resource requires networking Dec 16 12:13:24.904618 ignition[1195]: Ignition finished successfully Dec 16 12:13:24.950459 ignition[1203]: Ignition 2.24.0 Dec 16 12:13:24.950464 ignition[1203]: Stage: fetch Dec 16 12:13:24.950663 ignition[1203]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:13:24.950670 ignition[1203]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:13:24.950730 ignition[1203]: parsed url from cmdline: "" Dec 16 12:13:24.950738 ignition[1203]: no config URL provided Dec 16 12:13:24.950741 ignition[1203]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 12:13:24.950746 ignition[1203]: no config at "/usr/lib/ignition/user.ign" Dec 16 12:13:24.950767 ignition[1203]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 16 12:13:25.038617 ignition[1203]: GET result: OK Dec 16 12:13:25.038678 ignition[1203]: config has been read from IMDS userdata Dec 16 12:13:25.038692 ignition[1203]: parsing config with SHA512: 25538a953280b75a839b0412e0a9da7aca684454eec0259eb2167e80cb578683abf1784b8fc89b975f78ccbcd6ecde9997dde23af186fb97eecfd7d9b451b740 Dec 16 12:13:25.043317 unknown[1203]: fetched base config from "system" Dec 16 12:13:25.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:25.043534 ignition[1203]: fetch: fetch complete Dec 16 12:13:25.043322 unknown[1203]: fetched base config from "system" Dec 16 12:13:25.043538 ignition[1203]: fetch: fetch passed Dec 16 12:13:25.043326 unknown[1203]: fetched user config from "azure" Dec 16 12:13:25.043579 ignition[1203]: Ignition finished successfully Dec 16 12:13:25.047368 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 12:13:25.055092 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 12:13:25.095109 ignition[1210]: Ignition 2.24.0 Dec 16 12:13:25.100481 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 12:13:25.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:25.095114 ignition[1210]: Stage: kargs Dec 16 12:13:25.111867 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 12:13:25.095361 ignition[1210]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:13:25.095369 ignition[1210]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:13:25.096066 ignition[1210]: kargs: kargs passed Dec 16 12:13:25.096114 ignition[1210]: Ignition finished successfully Dec 16 12:13:25.139412 ignition[1216]: Ignition 2.24.0 Dec 16 12:13:25.144994 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Dec 16 12:13:25.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:25.139418 ignition[1216]: Stage: disks Dec 16 12:13:25.153007 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 12:13:25.139669 ignition[1216]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:13:25.163591 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 12:13:25.139676 ignition[1216]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:13:25.173742 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:13:25.140424 ignition[1216]: disks: disks passed Dec 16 12:13:25.184520 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:13:25.140476 ignition[1216]: Ignition finished successfully Dec 16 12:13:25.194874 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:13:25.207569 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 12:13:25.330564 systemd-fsck[1224]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks Dec 16 12:13:25.340529 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 12:13:25.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:25.348886 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 12:13:25.642778 kernel: EXT4-fs (sda9): mounted filesystem 0e69f709-36a9-4e15-b0c9-c7e150185653 r/w with ordered data mode. Quota mode: none. Dec 16 12:13:25.642876 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 12:13:25.647651 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 12:13:25.688135 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:13:25.694861 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 12:13:25.709022 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 16 12:13:25.720457 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 12:13:25.720514 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:13:25.738033 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 12:13:25.747669 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 12:13:25.773110 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1238) Dec 16 12:13:25.773149 kernel: BTRFS info (device sda6): first mount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 12:13:25.783667 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:13:25.794019 kernel: BTRFS info (device sda6): turning on async discard Dec 16 12:13:25.794069 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 12:13:25.795444 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 16 12:13:26.342436 coreos-metadata[1240]: Dec 16 12:13:26.342 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 16 12:13:26.732652 coreos-metadata[1240]: Dec 16 12:13:26.732 INFO Fetch successful Dec 16 12:13:26.732652 coreos-metadata[1240]: Dec 16 12:13:26.732 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 16 12:13:26.758892 coreos-metadata[1240]: Dec 16 12:13:26.758 INFO Fetch successful Dec 16 12:13:26.764550 coreos-metadata[1240]: Dec 16 12:13:26.758 INFO wrote hostname ci-4547.0.0-a-623de6ebc0 to /sysroot/etc/hostname Dec 16 12:13:26.772207 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 12:13:26.805191 kernel: kauditd_printk_skb: 13 callbacks suppressed Dec 16 12:13:26.805220 kernel: audit: type=1130 audit(1765887206.780:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:26.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:28.079094 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 12:13:28.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:28.087006 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 12:13:28.114350 kernel: audit: type=1130 audit(1765887208.084:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:28.115488 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 12:13:28.136932 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 12:13:28.148049 kernel: BTRFS info (device sda6): last unmount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 12:13:28.166989 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 12:13:28.172976 ignition[1343]: INFO : Ignition 2.24.0 Dec 16 12:13:28.172976 ignition[1343]: INFO : Stage: mount Dec 16 12:13:28.172976 ignition[1343]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:13:28.172976 ignition[1343]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:13:28.172976 ignition[1343]: INFO : mount: mount passed Dec 16 12:13:28.172976 ignition[1343]: INFO : Ignition finished successfully Dec 16 12:13:28.244627 kernel: audit: type=1130 audit(1765887208.180:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:28.244652 kernel: audit: type=1130 audit(1765887208.202:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:28.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:13:28.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:28.181581 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 12:13:28.204813 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 12:13:28.252883 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:13:28.285778 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1353) Dec 16 12:13:28.298823 kernel: BTRFS info (device sda6): first mount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 12:13:28.298870 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:13:28.309831 kernel: BTRFS info (device sda6): turning on async discard Dec 16 12:13:28.309856 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 12:13:28.311394 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 12:13:28.345793 ignition[1371]: INFO : Ignition 2.24.0 Dec 16 12:13:28.345793 ignition[1371]: INFO : Stage: files Dec 16 12:13:28.345793 ignition[1371]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:13:28.345793 ignition[1371]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:13:28.363729 ignition[1371]: DEBUG : files: compiled without relabeling support, skipping Dec 16 12:13:28.370775 ignition[1371]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 12:13:28.370775 ignition[1371]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 12:13:28.450344 ignition[1371]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 12:13:28.457794 ignition[1371]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 12:13:28.457794 ignition[1371]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 12:13:28.450826 unknown[1371]: wrote ssh authorized keys file for user: core Dec 16 12:13:28.600029 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 16 12:13:28.600029 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Dec 16 12:13:28.637018 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 12:13:28.696115 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 16 12:13:28.706364 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 12:13:28.706364 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 12:13:28.706364 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:13:28.706364 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:13:28.706364 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:13:28.706364 
ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:13:28.706364 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:13:28.706364 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:13:28.706364 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:13:28.706364 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:13:28.706364 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 16 12:13:28.706364 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 16 12:13:28.706364 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 16 12:13:28.706364 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Dec 16 12:13:29.255034 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 12:13:29.439004 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 16 12:13:29.439004 ignition[1371]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 12:13:29.464916 ignition[1371]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:13:29.475573 ignition[1371]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:13:29.475573 ignition[1371]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 12:13:29.475573 ignition[1371]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 16 12:13:29.475573 ignition[1371]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 12:13:29.475573 ignition[1371]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:13:29.475573 ignition[1371]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:13:29.475573 ignition[1371]: INFO : files: files passed Dec 16 12:13:29.475573 ignition[1371]: INFO : Ignition finished successfully Dec 16 12:13:29.563560 kernel: audit: type=1130 audit(1765887209.490:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:29.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 16 12:13:29.477075 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 12:13:29.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:29.491395 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 12:13:29.612384 kernel: audit: type=1130 audit(1765887209.569:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:29.612410 kernel: audit: type=1131 audit(1765887209.569:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:29.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:29.530408 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 12:13:29.549083 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 12:13:29.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:29.646789 initrd-setup-root-after-ignition[1401]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:13:29.646789 initrd-setup-root-after-ignition[1401]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:13:29.668800 kernel: audit: type=1130 audit(1765887209.625:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:29.559891 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 12:13:29.676034 initrd-setup-root-after-ignition[1405]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:13:29.613965 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:13:29.649195 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 12:13:29.669949 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 12:13:29.735980 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 12:13:29.737790 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 12:13:29.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:29.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:29.747858 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Dec 16 12:13:29.790688 kernel: audit: type=1130 audit(1765887209.747:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:29.790713 kernel: audit: type=1131 audit(1765887209.747:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:29.787025 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 12:13:29.796320 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 12:13:29.797908 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 12:13:29.835065 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:13:29.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:29.842529 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 12:13:29.877827 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 12:13:29.877999 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:13:29.889585 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:13:29.901054 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 12:13:29.911598 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 12:13:29.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:29.911733 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:13:29.926885 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 12:13:29.932696 systemd[1]: Stopped target basic.target - Basic System. Dec 16 12:13:29.943035 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 12:13:29.952503 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:13:29.961664 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 12:13:29.971270 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:13:29.981685 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 12:13:29.992324 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:13:30.004152 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 12:13:30.013176 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 12:13:30.024813 systemd[1]: Stopped target swap.target - Swaps. Dec 16 12:13:30.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.033531 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 12:13:30.033668 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:13:30.045856 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Dec 16 12:13:30.050975 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:13:30.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.060724 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 12:13:30.083104 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:13:30.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.089924 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 12:13:30.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.090045 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 12:13:30.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.107638 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 12:13:30.107773 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:13:30.119141 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 12:13:30.119222 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 12:13:30.130073 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 16 12:13:30.130162 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 12:13:30.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.208992 ignition[1426]: INFO : Ignition 2.24.0 Dec 16 12:13:30.208992 ignition[1426]: INFO : Stage: umount Dec 16 12:13:30.208992 ignition[1426]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:13:30.208992 ignition[1426]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:13:30.208992 ignition[1426]: INFO : umount: umount passed Dec 16 12:13:30.208992 ignition[1426]: INFO : Ignition finished successfully Dec 16 12:13:30.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:13:30.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.140329 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 12:13:30.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.170821 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 12:13:30.185915 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 12:13:30.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.186109 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:13:30.201028 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 12:13:30.201131 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:13:30.215235 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 12:13:30.215338 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:13:30.230560 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 12:13:30.230671 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 12:13:30.242728 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 12:13:30.242844 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 12:13:30.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.253405 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 12:13:30.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.253455 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 12:13:30.262475 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 16 12:13:30.262521 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 16 12:13:30.272787 systemd[1]: Stopped target network.target - Network. Dec 16 12:13:30.283715 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 12:13:30.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.283813 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:13:30.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.294963 systemd[1]: Stopped target paths.target - Path Units. 
Dec 16 12:13:30.317223 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 12:13:30.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.322118 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:13:30.487000 audit: BPF prog-id=6 op=UNLOAD Dec 16 12:13:30.492000 audit: BPF prog-id=9 op=UNLOAD Dec 16 12:13:30.330131 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 12:13:30.340231 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 12:13:30.350117 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 12:13:30.350188 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:13:30.360877 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 12:13:30.360916 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 12:13:30.371158 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 16 12:13:30.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.371176 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:13:30.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.383985 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 12:13:30.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.384052 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 12:13:30.395109 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 12:13:30.395152 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 12:13:30.406111 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 12:13:30.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.416172 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 12:13:30.431841 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 12:13:30.432462 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 12:13:30.433790 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 12:13:30.680876 kernel: hv_netvsc 0022487b-7438-0022-487b-74380022487b eth0: Data path switched from VF: enP11823s1 Dec 16 12:13:30.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.449029 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 12:13:30.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:13:30.449145 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 12:13:30.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.466730 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 12:13:30.466895 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 12:13:30.488232 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 12:13:30.499322 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 12:13:30.499377 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:13:30.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.520904 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 12:13:30.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.536952 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 12:13:30.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.537057 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:13:30.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.547607 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 12:13:30.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.547671 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:13:30.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.563214 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 12:13:30.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.563290 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 12:13:30.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:13:30.574155 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:13:30.608988 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 12:13:30.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.609135 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:13:30.626489 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 12:13:30.626527 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 12:13:30.644822 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 12:13:30.644872 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:13:30.661955 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 12:13:30.662033 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:13:30.680675 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 12:13:30.680742 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 12:13:30.691089 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 12:13:30.691149 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:13:30.715018 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 12:13:30.729830 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 12:13:30.729915 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:13:30.739055 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 12:13:30.739104 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:13:30.752543 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 16 12:13:30.752611 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:13:30.765691 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 12:13:30.963046 systemd-journald[525]: Received SIGTERM from PID 1 (systemd). Dec 16 12:13:30.765748 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:13:30.777190 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:13:30.777249 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:13:30.789575 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 12:13:30.789681 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 12:13:30.798187 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 12:13:30.798273 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 12:13:30.807419 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 12:13:30.807501 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 12:13:30.818306 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 12:13:30.827226 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 12:13:30.827335 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Dec 16 12:13:30.839171 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 12:13:30.870743 systemd[1]: Switching root. Dec 16 12:13:31.034580 systemd-journald[525]: Journal stopped Dec 16 12:13:44.444960 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 12:13:44.445007 kernel: SELinux: policy capability open_perms=1 Dec 16 12:13:44.445016 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 12:13:44.445023 kernel: SELinux: policy capability always_check_network=0 Dec 16 12:13:44.445035 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 12:13:44.445041 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 12:13:44.445048 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 12:13:44.445054 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 12:13:44.445060 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 12:13:44.445066 kernel: kauditd_printk_skb: 41 callbacks suppressed Dec 16 12:13:44.445074 kernel: audit: type=1403 audit(1765887212.220:89): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 16 12:13:44.445083 systemd[1]: Successfully loaded SELinux policy in 144.214ms. Dec 16 12:13:44.445090 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.695ms. Dec 16 12:13:44.445098 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:13:44.445107 systemd[1]: Detected virtualization microsoft. Dec 16 12:13:44.445114 systemd[1]: Detected architecture arm64. Dec 16 12:13:44.445121 systemd[1]: Detected first boot. Dec 16 12:13:44.445128 systemd[1]: Hostname set to <ci-4547.0.0-a-623de6ebc0>. Dec 16 12:13:44.445135 systemd[1]: Initializing machine ID from random generator. Dec 16 12:13:44.445143 kernel: audit: type=1334 audit(1765887213.052:90): prog-id=10 op=LOAD Dec 16 12:13:44.445151 kernel: audit: type=1334 audit(1765887213.052:91): prog-id=10 op=UNLOAD Dec 16 12:13:44.445157 kernel: audit: type=1334 audit(1765887213.056:92): prog-id=11 op=LOAD Dec 16 12:13:44.445164 kernel: audit: type=1334 audit(1765887213.056:93): prog-id=11 op=UNLOAD Dec 16 12:13:44.445171 zram_generator::config[1468]: No configuration found. Dec 16 12:13:44.445179 kernel: NET: Registered PF_VSOCK protocol family Dec 16 12:13:44.445186 systemd[1]: Populated /etc with preset unit settings. Dec 16 12:13:44.445193 kernel: audit: type=1334 audit(1765887222.775:94): prog-id=12 op=LOAD Dec 16 12:13:44.445199 kernel: audit: type=1334 audit(1765887222.775:95): prog-id=3 op=UNLOAD Dec 16 12:13:44.445206 kernel: audit: type=1334 audit(1765887222.779:96): prog-id=13 op=LOAD Dec 16 12:13:44.445212 kernel: audit: type=1334 audit(1765887222.781:97): prog-id=14 op=LOAD Dec 16 12:13:44.445218 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 12:13:44.445227 kernel: audit: type=1334 audit(1765887222.781:98): prog-id=4 op=UNLOAD Dec 16 12:13:44.445233 kernel: audit: type=1334 audit(1765887222.781:99): prog-id=5 op=UNLOAD Dec 16 12:13:44.445240 kernel: audit: type=1131 audit(1765887222.786:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:13:44.445246 kernel: audit: type=1334 audit(1765887222.822:101): prog-id=12 op=UNLOAD Dec 16 12:13:44.445253 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 12:13:44.445260 kernel: audit: type=1130 audit(1765887222.837:102): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:44.445268 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 12:13:44.445275 kernel: audit: type=1131 audit(1765887222.837:103): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:44.445283 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 12:13:44.445293 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 12:13:44.445300 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 12:13:44.445307 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 12:13:44.445316 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 12:13:44.445322 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 12:13:44.445329 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 12:13:44.445337 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 12:13:44.445344 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:13:44.445351 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:13:44.445359 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 12:13:44.445367 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 12:13:44.445374 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 12:13:44.445381 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:13:44.445388 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 16 12:13:44.445395 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:13:44.445402 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:13:44.445410 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 12:13:44.445421 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 12:13:44.445427 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 12:13:44.445434 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 12:13:44.445441 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:13:44.445448 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:13:44.445456 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 12:13:44.445463 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:13:44.445470 systemd[1]: Reached target swap.target - Swaps. 
Dec 16 12:13:44.445477 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 12:13:44.445484 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 12:13:44.445492 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 12:13:44.445499 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:13:44.445506 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 12:13:44.445513 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:13:44.445520 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 12:13:44.445527 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 12:13:44.445535 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:13:44.445542 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:13:44.445549 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 12:13:44.445556 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 12:13:44.445563 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 12:13:44.445570 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 12:13:44.445578 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 12:13:44.445585 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 12:13:44.445593 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 12:13:44.445600 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 12:13:44.445607 systemd[1]: Reached target machines.target - Containers. Dec 16 12:13:44.445614 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 12:13:44.445621 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:13:44.445629 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:13:44.445636 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 12:13:44.445643 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:13:44.445650 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:13:44.445657 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:13:44.445664 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 12:13:44.445670 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:13:44.445679 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 12:13:44.445686 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 12:13:44.445693 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 12:13:44.445700 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 12:13:44.445706 systemd[1]: Stopped systemd-fsck-usr.service. 
Dec 16 12:13:44.445714 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:13:44.445722 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:13:44.445730 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:13:44.445737 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:13:44.445744 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 12:13:44.445751 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 12:13:44.445777 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:13:44.445784 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 12:13:44.445792 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 12:13:44.445799 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 12:13:44.445806 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 12:13:44.445813 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 12:13:44.445819 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 12:13:44.445826 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:13:44.445833 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:13:44.445841 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:13:44.445848 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:13:44.445855 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:13:44.445862 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:13:44.445869 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:13:44.445891 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 12:13:44.445898 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 12:13:44.445905 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 12:13:44.445914 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:13:44.445922 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 12:13:44.445930 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:13:44.445937 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:13:44.445980 systemd-journald[1544]: Collecting audit messages is enabled. Dec 16 12:13:44.445999 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 12:13:44.446008 systemd-journald[1544]: Journal started Dec 16 12:13:44.446024 systemd-journald[1544]: Runtime Journal (/run/log/journal/88ee45fd5b9c450d8a7a976a24e24257) is 8M, max 78.3M, 70.3M free. 
Dec 16 12:13:43.380000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 16 12:13:43.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:43.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:43.998000 audit: BPF prog-id=14 op=UNLOAD Dec 16 12:13:43.998000 audit: BPF prog-id=13 op=UNLOAD Dec 16 12:13:43.998000 audit: BPF prog-id=15 op=LOAD Dec 16 12:13:43.998000 audit: BPF prog-id=16 op=LOAD Dec 16 12:13:43.998000 audit: BPF prog-id=17 op=LOAD Dec 16 12:13:44.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:44.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:44.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:44.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:44.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:44.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:44.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:44.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:44.431000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 12:13:44.431000 audit[1544]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffd78c03c0 a2=4000 a3=0 items=0 ppid=1 pid=1544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:13:44.431000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 12:13:42.762260 systemd[1]: Queued start job for default target multi-user.target. 
Dec 16 12:13:42.782224 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 16 12:13:42.787161 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 12:13:42.787470 systemd[1]: systemd-journald.service: Consumed 3.101s CPU time. Dec 16 12:13:44.462856 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:13:44.475381 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 12:13:44.483832 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:13:44.496392 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 12:13:44.511632 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 12:13:44.523197 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:13:44.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:44.524369 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 12:13:44.524560 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 12:13:44.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:44.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:44.530263 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:13:44.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:44.535690 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:13:44.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:44.542585 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 12:13:44.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:44.549274 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:13:44.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:44.564534 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Dec 16 12:13:44.571295 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 12:13:44.581893 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 12:13:44.588107 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:13:44.593638 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 12:13:44.653921 kernel: fuse: init (API version 7.41) Dec 16 12:13:44.654191 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 12:13:44.655807 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 12:13:44.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:44.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:44.761166 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 12:13:44.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:44.766839 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 12:13:44.777774 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 12:13:44.857889 systemd-journald[1544]: Time spent on flushing to /var/log/journal/88ee45fd5b9c450d8a7a976a24e24257 is 16.013ms for 1079 entries. Dec 16 12:13:44.857889 systemd-journald[1544]: System Journal (/var/log/journal/88ee45fd5b9c450d8a7a976a24e24257) is 8M, max 2.2G, 2.2G free. Dec 16 12:13:45.105998 systemd-journald[1544]: Received client request to flush runtime journal. Dec 16 12:13:45.106040 kernel: loop1: detected capacity change from 0 to 27544 Dec 16 12:13:44.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:44.966145 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:13:45.108815 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 12:13:45.115549 systemd-tmpfiles[1561]: ACLs are not supported, ignoring. Dec 16 12:13:45.115564 systemd-tmpfiles[1561]: ACLs are not supported, ignoring. Dec 16 12:13:45.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:45.119199 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:13:45.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:13:45.131776 kernel: ACPI: bus type drm_connector registered Dec 16 12:13:45.132748 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:13:45.133018 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:13:45.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:45.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:45.142218 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 12:13:45.156381 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 12:13:46.377938 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 12:13:46.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:46.385240 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 12:13:47.530064 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 12:13:47.531455 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 12:13:47.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:48.279787 kernel: loop2: detected capacity change from 0 to 200800 Dec 16 12:13:48.541897 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 12:13:48.566144 kernel: kauditd_printk_skb: 36 callbacks suppressed Dec 16 12:13:48.566294 kernel: audit: type=1130 audit(1765887228.546:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:48.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:48.551000 audit: BPF prog-id=18 op=LOAD Dec 16 12:13:48.572838 kernel: audit: type=1334 audit(1765887228.551:139): prog-id=18 op=LOAD Dec 16 12:13:48.567940 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... 
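[Editor's note] The systemd-journald lines above show the runtime journal being flushed into the /var/log/journal path shown, currently 8M with a 2.2G cap. A hedged sketch of inspecting that state later (standard journalctl options, not taken from this log):

  journalctl --disk-usage                          # space used by active and archived journals
  journalctl -b -u systemd-journal-flush.service   # the flush step recorded above
  journalctl --flush                               # ask journald to flush /run/log/journal again (root)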
Dec 16 12:13:48.566000 audit: BPF prog-id=19 op=LOAD Dec 16 12:13:48.583887 kernel: audit: type=1334 audit(1765887228.566:140): prog-id=19 op=LOAD Dec 16 12:13:48.566000 audit: BPF prog-id=20 op=LOAD Dec 16 12:13:48.588771 kernel: audit: type=1334 audit(1765887228.566:141): prog-id=20 op=LOAD Dec 16 12:13:48.589000 audit: BPF prog-id=21 op=LOAD Dec 16 12:13:48.877235 kernel: audit: type=1334 audit(1765887228.589:142): prog-id=21 op=LOAD Dec 16 12:13:48.877341 kernel: audit: type=1334 audit(1765887228.615:143): prog-id=22 op=LOAD Dec 16 12:13:48.877357 kernel: audit: type=1334 audit(1765887228.615:144): prog-id=23 op=LOAD Dec 16 12:13:48.877372 kernel: audit: type=1334 audit(1765887228.615:145): prog-id=24 op=LOAD Dec 16 12:13:48.877384 kernel: audit: type=1334 audit(1765887228.638:146): prog-id=25 op=LOAD Dec 16 12:13:48.877396 kernel: audit: type=1334 audit(1765887228.643:147): prog-id=26 op=LOAD Dec 16 12:13:48.615000 audit: BPF prog-id=22 op=LOAD Dec 16 12:13:48.615000 audit: BPF prog-id=23 op=LOAD Dec 16 12:13:48.615000 audit: BPF prog-id=24 op=LOAD Dec 16 12:13:48.638000 audit: BPF prog-id=25 op=LOAD Dec 16 12:13:48.643000 audit: BPF prog-id=26 op=LOAD Dec 16 12:13:48.643000 audit: BPF prog-id=27 op=LOAD Dec 16 12:13:48.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:48.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:48.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:48.595354 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:13:48.607161 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:13:48.620925 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 12:13:48.639350 systemd-tmpfiles[1629]: ACLs are not supported, ignoring. Dec 16 12:13:48.639358 systemd-tmpfiles[1629]: ACLs are not supported, ignoring. Dec 16 12:13:48.646983 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 12:13:48.660017 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:13:48.676886 systemd-nsresourced[1630]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 12:13:48.678128 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 16 12:13:48.688256 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 12:13:48.902428 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 12:13:48.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:13:48.907000 audit: BPF prog-id=8 op=UNLOAD Dec 16 12:13:48.907000 audit: BPF prog-id=7 op=UNLOAD Dec 16 12:13:48.908000 audit: BPF prog-id=28 op=LOAD Dec 16 12:13:48.908000 audit: BPF prog-id=29 op=LOAD Dec 16 12:13:48.910072 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:13:48.927791 kernel: loop3: detected capacity change from 0 to 100192 Dec 16 12:13:48.944087 systemd-udevd[1646]: Using default interface naming scheme 'v257'. Dec 16 12:13:49.250980 systemd-oomd[1626]: No swap; memory pressure usage will be degraded Dec 16 12:13:49.251961 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 16 12:13:49.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:49.303246 systemd-resolved[1627]: Positive Trust Anchors: Dec 16 12:13:49.303266 systemd-resolved[1627]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:13:49.303269 systemd-resolved[1627]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 12:13:49.303288 systemd-resolved[1627]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:13:50.280915 systemd-resolved[1627]: Using system hostname 'ci-4547.0.0-a-623de6ebc0'. Dec 16 12:13:50.282320 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:13:50.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:50.287623 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:13:50.637612 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:13:50.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:50.646000 audit: BPF prog-id=30 op=LOAD Dec 16 12:13:50.648461 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:13:50.698250 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
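[Editor's note] systemd-resolved logs its positive trust anchors (the two root-zone DS records) and its built-in negative trust anchors (private-range reverse zones and special-use domains) before settling on the hostname. A minimal, hedged sketch for inspecting that state on the running host (example.com is a placeholder):

  resolvectl status             # per-link DNS servers, DNSSEC setting, search domains
  resolvectl query example.com  # illustrative lookup through the stub resolver

Per the dnssec-trust-anchors.d documentation, additional *.positive or *.negative fragments under /etc/dnssec-trust-anchors.d/ would extend the lists logged above; none are assumed to exist here.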
Dec 16 12:13:50.904818 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#46 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 12:13:51.343901 kernel: hv_vmbus: registering driver hyperv_fb Dec 16 12:13:51.344021 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Dec 16 12:13:51.353475 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Dec 16 12:13:51.353817 kernel: Console: switching to colour dummy device 80x25 Dec 16 12:13:51.364040 kernel: Console: switching to colour frame buffer device 128x48 Dec 16 12:13:51.416131 kernel: hv_vmbus: registering driver hv_balloon Dec 16 12:13:51.416233 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 12:13:51.416263 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Dec 16 12:13:51.424693 kernel: hv_balloon: Memory hot add disabled on ARM64 Dec 16 12:13:51.603928 systemd-networkd[1659]: lo: Link UP Dec 16 12:13:51.603937 systemd-networkd[1659]: lo: Gained carrier Dec 16 12:13:51.606165 systemd-networkd[1659]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:13:51.606173 systemd-networkd[1659]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:13:51.606188 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:13:51.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:51.612008 systemd[1]: Reached target network.target - Network. Dec 16 12:13:51.617840 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 12:13:51.626881 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 12:13:51.676775 kernel: mlx5_core 2e2f:00:02.0 enP11823s1: Link up Dec 16 12:13:51.705785 kernel: hv_netvsc 0022487b-7438-0022-487b-74380022487b eth0: Data path switched to VF: enP11823s1 Dec 16 12:13:51.707138 systemd-networkd[1659]: enP11823s1: Link UP Dec 16 12:13:51.707467 systemd-networkd[1659]: eth0: Link UP Dec 16 12:13:51.707478 systemd-networkd[1659]: eth0: Gained carrier Dec 16 12:13:51.707494 systemd-networkd[1659]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:13:51.715307 systemd-networkd[1659]: enP11823s1: Gained carrier Dec 16 12:13:51.724799 systemd-networkd[1659]: eth0: DHCPv4 address 10.200.20.36/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 16 12:13:51.749988 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 12:13:51.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:51.766990 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:13:51.777869 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:13:51.778828 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
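[Editor's note] The networkd lines above show eth0 matching the stock /usr/lib/systemd/network/zz-default.network, acquiring 10.200.20.36/24 over DHCPv4, and handing the data path to the mlx5 VF enP11823s1. A hedged sketch of pinning that behaviour with a local fragment (the 10-eth0.network name is hypothetical; it sorts before zz-default.network and would therefore take precedence):

  cat <<'EOF' >/etc/systemd/network/10-eth0.network
  [Match]
  Name=eth0

  [Network]
  DHCP=ipv4
  EOF
  networkctl reload && networkctl status eth0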
Dec 16 12:13:51.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:51.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:51.785858 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:13:51.819780 kernel: MACsec IEEE 802.1AE Dec 16 12:13:51.882775 kernel: loop4: detected capacity change from 0 to 45344 Dec 16 12:13:51.963443 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 16 12:13:51.970296 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 12:13:52.051288 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 12:13:52.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:52.196939 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:13:52.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:52.241782 kernel: loop5: detected capacity change from 0 to 27544 Dec 16 12:13:52.260790 kernel: loop6: detected capacity change from 0 to 200800 Dec 16 12:13:52.285781 kernel: loop7: detected capacity change from 0 to 100192 Dec 16 12:13:52.301965 kernel: loop1: detected capacity change from 0 to 45344 Dec 16 12:13:52.316209 (sd-merge)[1780]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Dec 16 12:13:52.319088 (sd-merge)[1780]: Merged extensions into '/usr'. Dec 16 12:13:52.322647 systemd[1]: Reload requested from client PID 1560 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 12:13:52.322663 systemd[1]: Reloading... Dec 16 12:13:52.382789 zram_generator::config[1817]: No configuration found. Dec 16 12:13:52.557042 systemd[1]: Reloading finished in 234 ms. Dec 16 12:13:52.570283 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 12:13:52.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:52.594954 systemd[1]: Starting ensure-sysext.service... Dec 16 12:13:52.600924 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
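[Editor's note] The (sd-merge) lines above list the system extension images (containerd, docker, kubernetes, oem-azure) that systemd-sysext overlays onto /usr before the reload. A hedged sketch of inspecting and refreshing that merge on the running host (standard systemd-sysext verbs; directories such as /var/lib/extensions are the documented drop-in locations, not paths observed in this log):

  systemd-sysext status    # which hierarchies currently have extensions merged
  systemd-sysext refresh   # re-merge after adding or removing a .raw image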
Dec 16 12:13:52.606000 audit: BPF prog-id=31 op=LOAD Dec 16 12:13:52.606000 audit: BPF prog-id=15 op=UNLOAD Dec 16 12:13:52.606000 audit: BPF prog-id=32 op=LOAD Dec 16 12:13:52.606000 audit: BPF prog-id=33 op=LOAD Dec 16 12:13:52.606000 audit: BPF prog-id=16 op=UNLOAD Dec 16 12:13:52.606000 audit: BPF prog-id=17 op=UNLOAD Dec 16 12:13:52.616000 audit: BPF prog-id=34 op=LOAD Dec 16 12:13:52.616000 audit: BPF prog-id=35 op=LOAD Dec 16 12:13:52.616000 audit: BPF prog-id=28 op=UNLOAD Dec 16 12:13:52.616000 audit: BPF prog-id=29 op=UNLOAD Dec 16 12:13:52.617000 audit: BPF prog-id=36 op=LOAD Dec 16 12:13:52.617000 audit: BPF prog-id=18 op=UNLOAD Dec 16 12:13:52.617000 audit: BPF prog-id=37 op=LOAD Dec 16 12:13:52.617000 audit: BPF prog-id=38 op=LOAD Dec 16 12:13:52.617000 audit: BPF prog-id=19 op=UNLOAD Dec 16 12:13:52.618000 audit: BPF prog-id=20 op=UNLOAD Dec 16 12:13:52.618000 audit: BPF prog-id=39 op=LOAD Dec 16 12:13:52.618000 audit: BPF prog-id=22 op=UNLOAD Dec 16 12:13:52.618000 audit: BPF prog-id=40 op=LOAD Dec 16 12:13:52.619000 audit: BPF prog-id=41 op=LOAD Dec 16 12:13:52.619000 audit: BPF prog-id=23 op=UNLOAD Dec 16 12:13:52.619000 audit: BPF prog-id=24 op=UNLOAD Dec 16 12:13:52.619000 audit: BPF prog-id=42 op=LOAD Dec 16 12:13:52.619000 audit: BPF prog-id=30 op=UNLOAD Dec 16 12:13:52.620000 audit: BPF prog-id=43 op=LOAD Dec 16 12:13:52.620000 audit: BPF prog-id=21 op=UNLOAD Dec 16 12:13:52.621000 audit: BPF prog-id=44 op=LOAD Dec 16 12:13:52.621000 audit: BPF prog-id=25 op=UNLOAD Dec 16 12:13:52.621000 audit: BPF prog-id=45 op=LOAD Dec 16 12:13:52.621000 audit: BPF prog-id=46 op=LOAD Dec 16 12:13:52.621000 audit: BPF prog-id=26 op=UNLOAD Dec 16 12:13:52.621000 audit: BPF prog-id=27 op=UNLOAD Dec 16 12:13:52.626019 systemd-tmpfiles[1869]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 12:13:52.626050 systemd-tmpfiles[1869]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 12:13:52.626534 systemd-tmpfiles[1869]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 12:13:52.626884 systemd[1]: Reload requested from client PID 1868 ('systemctl') (unit ensure-sysext.service)... Dec 16 12:13:52.627070 systemd[1]: Reloading... Dec 16 12:13:52.627202 systemd-tmpfiles[1869]: ACLs are not supported, ignoring. Dec 16 12:13:52.627232 systemd-tmpfiles[1869]: ACLs are not supported, ignoring. Dec 16 12:13:52.687961 zram_generator::config[1906]: No configuration found. Dec 16 12:13:52.697021 systemd-tmpfiles[1869]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:13:52.697031 systemd-tmpfiles[1869]: Skipping /boot Dec 16 12:13:52.703376 systemd-tmpfiles[1869]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:13:52.703520 systemd-tmpfiles[1869]: Skipping /boot Dec 16 12:13:52.846256 systemd[1]: Reloading finished in 218 ms. Dec 16 12:13:52.866264 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:13:52.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:13:52.873000 audit: BPF prog-id=47 op=LOAD Dec 16 12:13:52.873000 audit: BPF prog-id=43 op=UNLOAD Dec 16 12:13:52.873000 audit: BPF prog-id=48 op=LOAD Dec 16 12:13:52.873000 audit: BPF prog-id=49 op=LOAD Dec 16 12:13:52.874000 audit: BPF prog-id=34 op=UNLOAD Dec 16 12:13:52.874000 audit: BPF prog-id=35 op=UNLOAD Dec 16 12:13:52.874000 audit: BPF prog-id=50 op=LOAD Dec 16 12:13:52.874000 audit: BPF prog-id=39 op=UNLOAD Dec 16 12:13:52.874000 audit: BPF prog-id=51 op=LOAD Dec 16 12:13:52.874000 audit: BPF prog-id=52 op=LOAD Dec 16 12:13:52.874000 audit: BPF prog-id=40 op=UNLOAD Dec 16 12:13:52.874000 audit: BPF prog-id=41 op=UNLOAD Dec 16 12:13:52.874000 audit: BPF prog-id=53 op=LOAD Dec 16 12:13:52.874000 audit: BPF prog-id=42 op=UNLOAD Dec 16 12:13:52.875000 audit: BPF prog-id=54 op=LOAD Dec 16 12:13:52.875000 audit: BPF prog-id=31 op=UNLOAD Dec 16 12:13:52.875000 audit: BPF prog-id=55 op=LOAD Dec 16 12:13:52.875000 audit: BPF prog-id=56 op=LOAD Dec 16 12:13:52.875000 audit: BPF prog-id=32 op=UNLOAD Dec 16 12:13:52.875000 audit: BPF prog-id=33 op=UNLOAD Dec 16 12:13:52.876000 audit: BPF prog-id=57 op=LOAD Dec 16 12:13:52.876000 audit: BPF prog-id=44 op=UNLOAD Dec 16 12:13:52.876000 audit: BPF prog-id=58 op=LOAD Dec 16 12:13:52.876000 audit: BPF prog-id=59 op=LOAD Dec 16 12:13:52.876000 audit: BPF prog-id=45 op=UNLOAD Dec 16 12:13:52.876000 audit: BPF prog-id=46 op=UNLOAD Dec 16 12:13:52.876000 audit: BPF prog-id=60 op=LOAD Dec 16 12:13:52.876000 audit: BPF prog-id=36 op=UNLOAD Dec 16 12:13:52.876000 audit: BPF prog-id=61 op=LOAD Dec 16 12:13:52.876000 audit: BPF prog-id=62 op=LOAD Dec 16 12:13:52.876000 audit: BPF prog-id=37 op=UNLOAD Dec 16 12:13:52.876000 audit: BPF prog-id=38 op=UNLOAD Dec 16 12:13:52.893247 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:13:52.905686 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 12:13:52.912878 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 12:13:52.918711 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 12:13:52.927395 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 12:13:52.938000 audit[1964]: SYSTEM_BOOT pid=1964 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 16 12:13:52.942235 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:13:52.943422 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:13:52.951612 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:13:52.960734 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:13:52.967417 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:13:52.967950 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:13:52.968285 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
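[Editor's note] The systemd-tmpfiles "Duplicate line for path ..., ignoring" warnings above are benign: two tmpfiles.d fragments declare the same path and the lower-priority line is skipped. A hedged sketch of the fragment format involved (hypothetical path and entry, shown only to illustrate the "type path mode user group age argument" layout):

  printf 'd /run/example 0755 root root -\n' >/etc/tmpfiles.d/example.conf
  systemd-tmpfiles --create /etc/tmpfiles.d/example.conf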
Dec 16 12:13:52.970145 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:13:52.970577 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:13:52.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:52.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:52.980485 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:13:52.980688 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:13:52.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:52.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:52.987124 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:13:52.987284 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:13:52.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:52.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:52.995980 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 12:13:53.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:53.002845 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 12:13:53.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:53.012703 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:13:53.014083 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:13:53.021987 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:13:53.028977 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:13:53.033850 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:13:53.034025 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Dec 16 12:13:53.034117 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:13:53.034961 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:13:53.035170 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:13:53.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:53.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:53.041922 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:13:53.042115 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:13:53.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:53.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:53.048745 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:13:53.049060 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:13:53.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:53.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:53.059218 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:13:53.060553 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:13:53.069657 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:13:53.076940 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:13:53.083010 systemd-networkd[1659]: eth0: Gained IPv6LL Dec 16 12:13:53.085989 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:13:53.093211 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:13:53.093407 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:13:53.093487 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:13:53.093594 systemd[1]: Reached target time-set.target - System Time Set. 
Dec 16 12:13:53.101477 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 12:13:53.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:53.107953 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:13:53.108159 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:13:53.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:53.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:53.113864 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:13:53.114055 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:13:53.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:53.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:53.119683 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:13:53.119865 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:13:53.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:53.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:53.126081 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:13:53.126260 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:13:53.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:53.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:53.135541 systemd[1]: Finished ensure-sysext.service. Dec 16 12:13:53.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:53.142019 systemd[1]: Reached target network-online.target - Network is Online. 
Dec 16 12:13:53.148197 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:13:53.148269 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:13:53.261000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 16 12:13:53.261000 audit[2008]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc8058120 a2=420 a3=0 items=0 ppid=1960 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:13:53.261000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:13:53.262733 augenrules[2008]: No rules Dec 16 12:13:53.263893 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:13:53.264163 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:13:53.666391 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 12:13:53.672514 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 12:14:03.490773 ldconfig[1962]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 12:14:03.502825 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 12:14:03.510544 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 12:14:03.526806 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 12:14:03.532276 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:14:03.538419 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 12:14:03.544465 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 12:14:03.551006 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 12:14:03.556441 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 12:14:03.562480 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 12:14:03.568837 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 16 12:14:03.573914 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 12:14:03.579741 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 12:14:03.579831 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:14:03.584513 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:14:03.589662 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 12:14:03.595637 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 12:14:03.602307 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
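[Editor's note] Two details above are worth unpacking: the PROCTITLE hex decodes to "/sbin/auditctl -R /etc/audit/audit.rules", and augenrules reports "No rules", so audit-rules.service finishes with an empty ruleset. A hedged sketch of adding and listing a rule by hand (the watch path and key are illustrative, not configuration from this host):

  auditctl -w /etc/ssh/sshd_config -p wa -k sshd-config   # watch writes and attribute changes
  auditctl -l                                             # list loaded rules ("No rules" here)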
Dec 16 12:14:03.608347 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 12:14:03.614557 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 12:14:03.627464 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 12:14:03.632772 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 12:14:03.639047 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 12:14:03.644627 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:14:03.648994 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:14:03.653730 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:14:03.653778 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:14:03.656345 systemd[1]: Starting chronyd.service - NTP client/server... Dec 16 12:14:03.671511 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 12:14:03.677727 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 12:14:03.687925 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 12:14:03.696093 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 12:14:03.703862 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 12:14:03.710883 chronyd[2020]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Dec 16 12:14:03.712778 chronyd[2020]: Timezone right/UTC failed leap second check, ignoring Dec 16 12:14:03.713112 chronyd[2020]: Loaded seccomp filter (level 2) Dec 16 12:14:03.714936 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 12:14:03.716614 jq[2028]: false Dec 16 12:14:03.720247 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 12:14:03.721230 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Dec 16 12:14:03.725834 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Dec 16 12:14:03.726744 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:14:03.729984 KVP[2030]: KVP starting; pid is:2030 Dec 16 12:14:03.734785 KVP[2030]: KVP LIC Version: 3.1 Dec 16 12:14:03.736770 kernel: hv_utils: KVP IC version 4.0 Dec 16 12:14:03.740251 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 12:14:03.745958 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 12:14:03.752898 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 12:14:03.758922 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 12:14:03.765133 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 12:14:03.771825 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 12:14:03.776085 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 12:14:03.776456 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Dec 16 12:14:03.778906 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 12:14:03.783960 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 12:14:03.790073 systemd[1]: Started chronyd.service - NTP client/server. Dec 16 12:14:03.792099 jq[2042]: true Dec 16 12:14:03.798108 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 12:14:03.798324 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 12:14:03.821682 jq[2050]: true Dec 16 12:14:03.849124 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 12:14:03.860199 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 12:14:03.862656 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 12:14:03.872569 systemd-logind[2038]: New seat seat0. Dec 16 12:14:03.879938 systemd-logind[2038]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Dec 16 12:14:03.880176 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 12:14:03.891826 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 12:14:03.906467 update_engine[2039]: I20251216 12:14:03.906378 2039 main.cc:92] Flatcar Update Engine starting Dec 16 12:14:03.918439 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 12:14:03.918714 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 12:14:04.052929 extend-filesystems[2029]: Found /dev/sda6 Dec 16 12:14:04.079956 tar[2049]: linux-arm64/LICENSE Dec 16 12:14:04.080244 tar[2049]: linux-arm64/helm Dec 16 12:14:04.103531 bash[2083]: Updated "/home/core/.ssh/authorized_keys" Dec 16 12:14:04.104730 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 12:14:04.106298 extend-filesystems[2029]: Found /dev/sda9 Dec 16 12:14:04.127156 extend-filesystems[2029]: Checking size of /dev/sda9 Dec 16 12:14:04.120963 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 16 12:14:04.142809 dbus-daemon[2023]: [system] SELinux support is enabled Dec 16 12:14:04.143082 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 12:14:04.153045 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 12:14:04.153271 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 12:14:04.166193 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 12:14:04.172893 update_engine[2039]: I20251216 12:14:04.166347 2039 update_check_scheduler.cc:74] Next update check in 3m9s Dec 16 12:14:04.166215 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 12:14:04.180945 systemd[1]: Started update-engine.service - Update Engine. 
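[Editor's note] chronyd 4.8 starts above with its seccomp filter loaded and drops the right/UTC zone after the leap-second sanity check, while update-engine schedules its first check. A hedged sketch for verifying time sync once chronyd is running (standard chronyc subcommands):

  chronyc tracking     # current reference, stratum, and offset
  chronyc sources -v   # configured time sources and their reachability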
Dec 16 12:14:04.186080 dbus-daemon[2023]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 16 12:14:04.186919 extend-filesystems[2029]: Resized partition /dev/sda9 Dec 16 12:14:04.199747 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 12:14:04.212586 extend-filesystems[2113]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 12:14:04.284084 coreos-metadata[2022]: Dec 16 12:14:04.232 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 16 12:14:04.284689 coreos-metadata[2022]: Dec 16 12:14:04.284 INFO Fetch successful Dec 16 12:14:04.284689 coreos-metadata[2022]: Dec 16 12:14:04.284 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Dec 16 12:14:04.284689 coreos-metadata[2022]: Dec 16 12:14:04.284 INFO Fetch successful Dec 16 12:14:04.284689 coreos-metadata[2022]: Dec 16 12:14:04.284 INFO Fetching http://168.63.129.16/machine/f0298efb-54dc-406a-957c-fcf1d6d5eb87/b173d3b8%2D289f%2D40b3%2Dad6c%2D52f6ed4fbf24.%5Fci%2D4547.0.0%2Da%2D623de6ebc0?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Dec 16 12:14:04.284689 coreos-metadata[2022]: Dec 16 12:14:04.284 INFO Fetch successful Dec 16 12:14:04.284689 coreos-metadata[2022]: Dec 16 12:14:04.284 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Dec 16 12:14:04.284689 coreos-metadata[2022]: Dec 16 12:14:04.284 INFO Fetch successful Dec 16 12:14:04.320958 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 12:14:04.327648 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 12:14:04.330270 sshd_keygen[2079]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 12:14:04.373996 kernel: EXT4-fs (sda9): resizing filesystem from 6359552 to 6376955 blocks Dec 16 12:14:04.374083 kernel: EXT4-fs (sda9): resized filesystem to 6376955 Dec 16 12:14:04.382713 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 12:14:04.806714 containerd[2077]: time="2025-12-16T12:14:04Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 12:14:04.399029 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 12:14:04.408974 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Dec 16 12:14:04.416742 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 12:14:04.416959 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 12:14:04.446725 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 12:14:04.643893 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 12:14:04.651202 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 12:14:04.658107 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 16 12:14:04.663688 systemd[1]: Reached target getty.target - Login Prompts. 
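[Editor's note] extend-filesystems finds /dev/sda6 and /dev/sda9 and invokes resize2fs 1.47.3 to grow the root filesystem online, while coreos-metadata fetches the goal state and vmSize from the Azure wireserver and IMDS endpoints. A hedged sketch of the equivalent manual grow (device names taken from the log; only meaningful after the partition itself has been enlarged):

  lsblk /dev/sda          # confirm the partition layout
  resize2fs /dev/sda9     # online-grow the mounted ext4 filesystem to fill the partition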
Dec 16 12:14:04.798894 locksmithd[2114]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 12:14:04.810771 containerd[2077]: time="2025-12-16T12:14:04.809628944Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 12:14:04.825211 containerd[2077]: time="2025-12-16T12:14:04.824618344Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.24µs" Dec 16 12:14:04.825359 containerd[2077]: time="2025-12-16T12:14:04.825338080Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 12:14:04.825464 containerd[2077]: time="2025-12-16T12:14:04.825452584Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 12:14:04.826076 containerd[2077]: time="2025-12-16T12:14:04.826055864Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 12:14:04.826301 containerd[2077]: time="2025-12-16T12:14:04.826280152Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 12:14:04.828246 containerd[2077]: time="2025-12-16T12:14:04.826348720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:14:04.828246 containerd[2077]: time="2025-12-16T12:14:04.826989544Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:14:04.828246 containerd[2077]: time="2025-12-16T12:14:04.827006504Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:14:04.828246 containerd[2077]: time="2025-12-16T12:14:04.827224184Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:14:04.828246 containerd[2077]: time="2025-12-16T12:14:04.827236976Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:14:04.828246 containerd[2077]: time="2025-12-16T12:14:04.827244408Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:14:04.828246 containerd[2077]: time="2025-12-16T12:14:04.827249024Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 12:14:04.828246 containerd[2077]: time="2025-12-16T12:14:04.827382160Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 12:14:04.828246 containerd[2077]: time="2025-12-16T12:14:04.827390040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 12:14:04.829248 containerd[2077]: time="2025-12-16T12:14:04.828516072Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 12:14:04.829248 containerd[2077]: time="2025-12-16T12:14:04.828701624Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Dec 16 12:14:04.829248 containerd[2077]: time="2025-12-16T12:14:04.828724280Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:14:04.829248 containerd[2077]: time="2025-12-16T12:14:04.828731904Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 12:14:04.829348 extend-filesystems[2113]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 16 12:14:04.829348 extend-filesystems[2113]: old_desc_blocks = 4, new_desc_blocks = 4 Dec 16 12:14:04.829348 extend-filesystems[2113]: The filesystem on /dev/sda9 is now 6376955 (4k) blocks long. Dec 16 12:14:04.864830 containerd[2077]: time="2025-12-16T12:14:04.835041512Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 12:14:04.864830 containerd[2077]: time="2025-12-16T12:14:04.835981600Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 12:14:04.864830 containerd[2077]: time="2025-12-16T12:14:04.836091488Z" level=info msg="metadata content store policy set" policy=shared Dec 16 12:14:04.864906 extend-filesystems[2029]: Resized filesystem in /dev/sda9 Dec 16 12:14:04.832696 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 12:14:04.833378 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 12:14:04.850992 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Dec 16 12:14:04.879948 containerd[2077]: time="2025-12-16T12:14:04.879869808Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 12:14:04.880165 containerd[2077]: time="2025-12-16T12:14:04.880068128Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 12:14:04.881429 containerd[2077]: time="2025-12-16T12:14:04.880364888Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 12:14:04.881429 containerd[2077]: time="2025-12-16T12:14:04.880386960Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 12:14:04.881429 containerd[2077]: time="2025-12-16T12:14:04.880400240Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 12:14:04.881429 containerd[2077]: time="2025-12-16T12:14:04.880411456Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 12:14:04.881429 containerd[2077]: time="2025-12-16T12:14:04.880425824Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 12:14:04.881429 containerd[2077]: time="2025-12-16T12:14:04.880432600Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 12:14:04.881429 containerd[2077]: time="2025-12-16T12:14:04.880440744Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 12:14:04.881429 containerd[2077]: time="2025-12-16T12:14:04.880450904Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service 
type=io.containerd.service.v1 Dec 16 12:14:04.881429 containerd[2077]: time="2025-12-16T12:14:04.880459552Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 12:14:04.881429 containerd[2077]: time="2025-12-16T12:14:04.880467616Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 12:14:04.881429 containerd[2077]: time="2025-12-16T12:14:04.880473728Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 12:14:04.881429 containerd[2077]: time="2025-12-16T12:14:04.880481960Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 12:14:04.881429 containerd[2077]: time="2025-12-16T12:14:04.880651160Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 12:14:04.881644 containerd[2077]: time="2025-12-16T12:14:04.880669600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 12:14:04.881644 containerd[2077]: time="2025-12-16T12:14:04.880681912Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 12:14:04.881644 containerd[2077]: time="2025-12-16T12:14:04.880688824Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 12:14:04.881644 containerd[2077]: time="2025-12-16T12:14:04.880697648Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 12:14:04.881644 containerd[2077]: time="2025-12-16T12:14:04.880705264Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 12:14:04.881644 containerd[2077]: time="2025-12-16T12:14:04.880713824Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 12:14:04.881644 containerd[2077]: time="2025-12-16T12:14:04.880720584Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 12:14:04.881644 containerd[2077]: time="2025-12-16T12:14:04.880727464Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 12:14:04.881644 containerd[2077]: time="2025-12-16T12:14:04.880734128Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 12:14:04.881644 containerd[2077]: time="2025-12-16T12:14:04.880740232Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 12:14:04.881644 containerd[2077]: time="2025-12-16T12:14:04.880788160Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 12:14:04.881644 containerd[2077]: time="2025-12-16T12:14:04.880823704Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 12:14:04.881644 containerd[2077]: time="2025-12-16T12:14:04.880833640Z" level=info msg="Start snapshots syncer" Dec 16 12:14:04.881644 containerd[2077]: time="2025-12-16T12:14:04.880851376Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 12:14:04.881824 containerd[2077]: time="2025-12-16T12:14:04.881051480Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 12:14:04.881824 containerd[2077]: time="2025-12-16T12:14:04.881090216Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 12:14:04.881901 containerd[2077]: time="2025-12-16T12:14:04.881127416Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 12:14:04.881901 containerd[2077]: time="2025-12-16T12:14:04.881222968Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 12:14:04.881901 containerd[2077]: time="2025-12-16T12:14:04.881237432Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 12:14:04.881901 containerd[2077]: time="2025-12-16T12:14:04.881245848Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 12:14:04.881901 containerd[2077]: time="2025-12-16T12:14:04.881252336Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 12:14:04.881901 containerd[2077]: time="2025-12-16T12:14:04.881260248Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 12:14:04.881901 containerd[2077]: time="2025-12-16T12:14:04.881266496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 12:14:04.881901 containerd[2077]: time="2025-12-16T12:14:04.881276184Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 12:14:04.881901 containerd[2077]: time="2025-12-16T12:14:04.881282768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 
12:14:04.881901 containerd[2077]: time="2025-12-16T12:14:04.881289344Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 12:14:04.881901 containerd[2077]: time="2025-12-16T12:14:04.881311160Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:14:04.881901 containerd[2077]: time="2025-12-16T12:14:04.881320720Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:14:04.881901 containerd[2077]: time="2025-12-16T12:14:04.881326040Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:14:04.882043 containerd[2077]: time="2025-12-16T12:14:04.881331552Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:14:04.882043 containerd[2077]: time="2025-12-16T12:14:04.881335952Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 12:14:04.882043 containerd[2077]: time="2025-12-16T12:14:04.881341520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 12:14:04.882043 containerd[2077]: time="2025-12-16T12:14:04.881348168Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 12:14:04.882043 containerd[2077]: time="2025-12-16T12:14:04.881361392Z" level=info msg="runtime interface created" Dec 16 12:14:04.882043 containerd[2077]: time="2025-12-16T12:14:04.881367320Z" level=info msg="created NRI interface" Dec 16 12:14:04.882043 containerd[2077]: time="2025-12-16T12:14:04.881372256Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 12:14:04.882043 containerd[2077]: time="2025-12-16T12:14:04.881379648Z" level=info msg="Connect containerd service" Dec 16 12:14:04.882043 containerd[2077]: time="2025-12-16T12:14:04.881393592Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 12:14:04.886779 containerd[2077]: time="2025-12-16T12:14:04.886476544Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:14:04.948160 tar[2049]: linux-arm64/README.md Dec 16 12:14:04.963162 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 12:14:05.070716 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
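The "starting cri plugin" entry just above dumps the whole CRI configuration as one escaped JSON string (config="{...}"), and the cni load error logged right before containerd finishes starting ("no network config found in /etc/cni/net.d") is simply that the confDir named in that blob is still empty at this point; it stays that way until a CNI add-on drops a config there. A minimal Python sketch for reading that blob back out of the journal (the function names are mine, and the stdin usage assumes the journal text is piped in, e.g. from journalctl -u containerd):

    import json
    import re
    import sys
    from pathlib import Path

    def cri_config_from_journal(text: str) -> dict:
        """Pull the escaped JSON out of the config="..." field of the 'starting cri plugin' entry."""
        m = re.search(r'config="(\{.*?\})"', text)
        if not m:
            raise ValueError('no config="..." field found')
        # In the capture above the inner quotes appear as \" ; undo that before parsing.
        return json.loads(m.group(1).replace('\\"', '"'))

    def summarize(cfg: dict) -> None:
        runc = cfg["containerd"]["runtimes"]["runc"]["options"]
        conf_dir = Path(cfg["cni"]["confDir"])           # /etc/cni/net.d in the entry above
        found = sorted(conf_dir.glob("*.conf*")) if conf_dir.is_dir() else []
        print("SystemdCgroup:", runc["SystemdCgroup"])   # true in the blob above
        print("CNI conf dir:", conf_dir, "->",
              [p.name for p in found] or "empty -> 'cni plugin not initialized'")

    if __name__ == "__main__":
        summarize(cri_config_from_journal(sys.stdin.read()))

Run against this journal the sketch would report the empty conf dir behind the warning above; the CRI plugin keeps retrying through its conf syncer, so no containerd restart is needed once a config appears.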
Dec 16 12:14:05.251097 containerd[2077]: time="2025-12-16T12:14:05.251029480Z" level=info msg="Start subscribing containerd event" Dec 16 12:14:05.251097 containerd[2077]: time="2025-12-16T12:14:05.251098008Z" level=info msg="Start recovering state" Dec 16 12:14:05.251222 containerd[2077]: time="2025-12-16T12:14:05.251189208Z" level=info msg="Start event monitor" Dec 16 12:14:05.251222 containerd[2077]: time="2025-12-16T12:14:05.251199040Z" level=info msg="Start cni network conf syncer for default" Dec 16 12:14:05.251222 containerd[2077]: time="2025-12-16T12:14:05.251204208Z" level=info msg="Start streaming server" Dec 16 12:14:05.251222 containerd[2077]: time="2025-12-16T12:14:05.251210696Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 12:14:05.251222 containerd[2077]: time="2025-12-16T12:14:05.251217120Z" level=info msg="runtime interface starting up..." Dec 16 12:14:05.251222 containerd[2077]: time="2025-12-16T12:14:05.251221264Z" level=info msg="starting plugins..." Dec 16 12:14:05.251363 containerd[2077]: time="2025-12-16T12:14:05.251232264Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 12:14:05.251555 containerd[2077]: time="2025-12-16T12:14:05.251445648Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 12:14:05.251555 containerd[2077]: time="2025-12-16T12:14:05.251510272Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 12:14:05.251643 containerd[2077]: time="2025-12-16T12:14:05.251632040Z" level=info msg="containerd successfully booted in 0.488431s" Dec 16 12:14:05.252005 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 12:14:05.260548 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 12:14:05.269640 systemd[1]: Startup finished in 2.830s (kernel) + 12.996s (initrd) + 33.191s (userspace) = 49.017s. Dec 16 12:14:05.294692 (kubelet)[2228]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:14:05.602828 kubelet[2228]: E1216 12:14:05.602771 2228 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:14:05.604719 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:14:05.604845 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:14:05.607971 systemd[1]: kubelet.service: Consumed 512ms CPU time, 247.6M memory peak. Dec 16 12:14:05.971187 login[2197]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:14:05.971577 login[2198]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:14:05.977801 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 12:14:05.978922 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 12:14:05.984477 systemd-logind[2038]: New session 2 of user core. Dec 16 12:14:05.988432 systemd-logind[2038]: New session 1 of user core. Dec 16 12:14:06.007407 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 12:14:06.010294 systemd[1]: Starting user@500.service - User Manager for UID 500... 
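The kubelet failure above (and its repeats later in this log) comes down to a missing /var/lib/kubelet/config.yaml: on a kubeadm-provisioned node that file only appears once kubeadm init or kubeadm join has run, so until then the unit exits and systemd keeps rescheduling it. If you would rather wait for the bootstrap than tail the crash loop, a small sketch (the path is taken from the error message; the helper name and timings are mine):

    import time
    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")  # path from the kubelet error above

    def wait_for_kubelet_config(timeout_s: float = 600.0, poll_s: float = 5.0) -> bool:
        """Poll until kubeadm (or whatever provisions the node) has written the kubelet config."""
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            if KUBELET_CONFIG.is_file():
                return True
            time.sleep(poll_s)
        return False

    if __name__ == "__main__":
        print("kubelet config present:", wait_for_kubelet_config(timeout_s=30))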
Dec 16 12:14:06.027286 (systemd)[2249]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:14:06.029778 systemd-logind[2038]: New session 3 of user core. Dec 16 12:14:06.172988 systemd[2249]: Queued start job for default target default.target. Dec 16 12:14:06.180671 systemd[2249]: Created slice app.slice - User Application Slice. Dec 16 12:14:06.180708 systemd[2249]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 12:14:06.180718 systemd[2249]: Reached target paths.target - Paths. Dec 16 12:14:06.180792 systemd[2249]: Reached target timers.target - Timers. Dec 16 12:14:06.182061 systemd[2249]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 12:14:06.182641 systemd[2249]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 12:14:06.193371 systemd[2249]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 12:14:06.193452 systemd[2249]: Reached target sockets.target - Sockets. Dec 16 12:14:06.194718 systemd[2249]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 12:14:06.194841 systemd[2249]: Reached target basic.target - Basic System. Dec 16 12:14:06.194897 systemd[2249]: Reached target default.target - Main User Target. Dec 16 12:14:06.194920 systemd[2249]: Startup finished in 160ms. Dec 16 12:14:06.195085 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 12:14:06.199965 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 12:14:06.200540 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 12:14:06.566800 waagent[2211]: 2025-12-16T12:14:06.566683Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Dec 16 12:14:06.571634 waagent[2211]: 2025-12-16T12:14:06.571580Z INFO Daemon Daemon OS: flatcar 4547.0.0 Dec 16 12:14:06.575370 waagent[2211]: 2025-12-16T12:14:06.575333Z INFO Daemon Daemon Python: 3.11.13 Dec 16 12:14:06.579017 waagent[2211]: 2025-12-16T12:14:06.578958Z INFO Daemon Daemon Run daemon Dec 16 12:14:06.582301 waagent[2211]: 2025-12-16T12:14:06.582263Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4547.0.0' Dec 16 12:14:06.589996 waagent[2211]: 2025-12-16T12:14:06.589946Z INFO Daemon Daemon Using waagent for provisioning Dec 16 12:14:06.594638 waagent[2211]: 2025-12-16T12:14:06.594597Z INFO Daemon Daemon Activate resource disk Dec 16 12:14:06.598580 waagent[2211]: 2025-12-16T12:14:06.598544Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 16 12:14:06.607230 waagent[2211]: 2025-12-16T12:14:06.607182Z INFO Daemon Daemon Found device: None Dec 16 12:14:06.610755 waagent[2211]: 2025-12-16T12:14:06.610715Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 16 12:14:06.617477 waagent[2211]: 2025-12-16T12:14:06.617438Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 16 12:14:06.626553 waagent[2211]: 2025-12-16T12:14:06.626508Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 12:14:06.631242 waagent[2211]: 2025-12-16T12:14:06.631208Z INFO Daemon Daemon Running default provisioning handler Dec 16 12:14:06.640531 waagent[2211]: 2025-12-16T12:14:06.640481Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 
'cloud-init-local.service']' returned non-zero exit status 4. Dec 16 12:14:06.652412 waagent[2211]: 2025-12-16T12:14:06.652353Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 16 12:14:06.660701 waagent[2211]: 2025-12-16T12:14:06.660656Z INFO Daemon Daemon cloud-init is enabled: False Dec 16 12:14:06.664895 waagent[2211]: 2025-12-16T12:14:06.664859Z INFO Daemon Daemon Copying ovf-env.xml Dec 16 12:14:06.730803 waagent[2211]: 2025-12-16T12:14:06.730040Z INFO Daemon Daemon Successfully mounted dvd Dec 16 12:14:06.757017 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 16 12:14:06.760787 waagent[2211]: 2025-12-16T12:14:06.759782Z INFO Daemon Daemon Detect protocol endpoint Dec 16 12:14:06.764282 waagent[2211]: 2025-12-16T12:14:06.764231Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 12:14:06.769343 waagent[2211]: 2025-12-16T12:14:06.769303Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Dec 16 12:14:06.774567 waagent[2211]: 2025-12-16T12:14:06.774527Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 16 12:14:06.779182 waagent[2211]: 2025-12-16T12:14:06.779143Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 16 12:14:06.783616 waagent[2211]: 2025-12-16T12:14:06.783582Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 16 12:14:06.797169 waagent[2211]: 2025-12-16T12:14:06.797128Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 16 12:14:06.802635 waagent[2211]: 2025-12-16T12:14:06.802610Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 16 12:14:06.806606 waagent[2211]: 2025-12-16T12:14:06.806578Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 16 12:14:06.869640 waagent[2211]: 2025-12-16T12:14:06.869494Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 16 12:14:06.874821 waagent[2211]: 2025-12-16T12:14:06.874764Z INFO Daemon Daemon Forcing an update of the goal state. Dec 16 12:14:06.883655 waagent[2211]: 2025-12-16T12:14:06.883602Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 12:14:06.903910 waagent[2211]: 2025-12-16T12:14:06.903872Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Dec 16 12:14:06.909525 waagent[2211]: 2025-12-16T12:14:06.909486Z INFO Daemon Dec 16 12:14:06.911938 waagent[2211]: 2025-12-16T12:14:06.911905Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 9639d83a-5b4c-4813-9460-c470d195015a eTag: 6297779690619066855 source: Fabric] Dec 16 12:14:06.920899 waagent[2211]: 2025-12-16T12:14:06.920860Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Dec 16 12:14:06.926398 waagent[2211]: 2025-12-16T12:14:06.926366Z INFO Daemon Dec 16 12:14:06.928550 waagent[2211]: 2025-12-16T12:14:06.928520Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Dec 16 12:14:06.938504 waagent[2211]: 2025-12-16T12:14:06.938473Z INFO Daemon Daemon Downloading artifacts profile blob Dec 16 12:14:07.004296 waagent[2211]: 2025-12-16T12:14:07.004216Z INFO Daemon Downloaded certificate {'thumbprint': '80C9F0F97906C21FDEB3C795F72C31F485CCBF08', 'hasPrivateKey': True} Dec 16 12:14:07.011986 waagent[2211]: 2025-12-16T12:14:07.011938Z INFO Daemon Fetch goal state completed Dec 16 12:14:07.022701 waagent[2211]: 2025-12-16T12:14:07.022661Z INFO Daemon Daemon Starting provisioning Dec 16 12:14:07.026794 waagent[2211]: 2025-12-16T12:14:07.026749Z INFO Daemon Daemon Handle ovf-env.xml. 
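The protocol detection above boils down to "can this VM reach the Azure wire server at 168.63.129.16". A rough stand-in for that check (the address is the one logged above; probing TCP port 80 is my approximation of the agent's route test, not what waagent itself does):

    import socket

    WIRESERVER = "168.63.129.16"  # wire server endpoint logged by waagent above

    def wireserver_reachable(port: int = 80, timeout_s: float = 2.0) -> bool:
        """Best-effort TCP probe; True if a connection to the wire server can be opened."""
        try:
            with socket.create_connection((WIRESERVER, port), timeout=timeout_s):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        print("wire server reachable:", wireserver_reachable())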
Dec 16 12:14:07.030705 waagent[2211]: 2025-12-16T12:14:07.030674Z INFO Daemon Daemon Set hostname [ci-4547.0.0-a-623de6ebc0] Dec 16 12:14:07.040742 waagent[2211]: 2025-12-16T12:14:07.037830Z INFO Daemon Daemon Publish hostname [ci-4547.0.0-a-623de6ebc0] Dec 16 12:14:07.043235 waagent[2211]: 2025-12-16T12:14:07.043191Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 16 12:14:07.048978 waagent[2211]: 2025-12-16T12:14:07.048927Z INFO Daemon Daemon Primary interface is [eth0] Dec 16 12:14:07.059312 systemd-networkd[1659]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:14:07.059321 systemd-networkd[1659]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:14:07.059410 systemd-networkd[1659]: eth0: DHCP lease lost Dec 16 12:14:07.071775 waagent[2211]: 2025-12-16T12:14:07.071634Z INFO Daemon Daemon Create user account if not exists Dec 16 12:14:07.076555 waagent[2211]: 2025-12-16T12:14:07.076498Z INFO Daemon Daemon User core already exists, skip useradd Dec 16 12:14:07.081429 waagent[2211]: 2025-12-16T12:14:07.081370Z INFO Daemon Daemon Configure sudoer Dec 16 12:14:07.088829 systemd-networkd[1659]: eth0: DHCPv4 address 10.200.20.36/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 16 12:14:07.090178 waagent[2211]: 2025-12-16T12:14:07.089497Z INFO Daemon Daemon Configure sshd Dec 16 12:14:07.097588 waagent[2211]: 2025-12-16T12:14:07.097528Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Dec 16 12:14:07.108168 waagent[2211]: 2025-12-16T12:14:07.108110Z INFO Daemon Daemon Deploy ssh public key. Dec 16 12:14:08.236846 waagent[2211]: 2025-12-16T12:14:08.236794Z INFO Daemon Daemon Provisioning complete Dec 16 12:14:08.254304 waagent[2211]: 2025-12-16T12:14:08.254259Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 16 12:14:08.259635 waagent[2211]: 2025-12-16T12:14:08.259596Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Dec 16 12:14:08.267464 waagent[2211]: 2025-12-16T12:14:08.267427Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Dec 16 12:14:08.368222 waagent[2302]: 2025-12-16T12:14:08.368146Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Dec 16 12:14:08.369830 waagent[2302]: 2025-12-16T12:14:08.368624Z INFO ExtHandler ExtHandler OS: flatcar 4547.0.0 Dec 16 12:14:08.369830 waagent[2302]: 2025-12-16T12:14:08.368685Z INFO ExtHandler ExtHandler Python: 3.11.13 Dec 16 12:14:08.369830 waagent[2302]: 2025-12-16T12:14:08.368727Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Dec 16 12:14:08.402442 waagent[2302]: 2025-12-16T12:14:08.402373Z INFO ExtHandler ExtHandler Distro: flatcar-4547.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Dec 16 12:14:08.402791 waagent[2302]: 2025-12-16T12:14:08.402732Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 12:14:08.402965 waagent[2302]: 2025-12-16T12:14:08.402934Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 12:14:08.409471 waagent[2302]: 2025-12-16T12:14:08.409417Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 12:14:08.415355 waagent[2302]: 2025-12-16T12:14:08.415313Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Dec 16 12:14:08.415915 waagent[2302]: 2025-12-16T12:14:08.415878Z INFO ExtHandler Dec 16 12:14:08.416075 waagent[2302]: 2025-12-16T12:14:08.416047Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 91a9c3e8-15d0-4186-b9e1-9448bb2e0c4f eTag: 6297779690619066855 source: Fabric] Dec 16 12:14:08.416402 waagent[2302]: 2025-12-16T12:14:08.416371Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Dec 16 12:14:08.416961 waagent[2302]: 2025-12-16T12:14:08.416926Z INFO ExtHandler Dec 16 12:14:08.417078 waagent[2302]: 2025-12-16T12:14:08.417054Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 16 12:14:08.421121 waagent[2302]: 2025-12-16T12:14:08.421092Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 16 12:14:08.490651 waagent[2302]: 2025-12-16T12:14:08.490517Z INFO ExtHandler Downloaded certificate {'thumbprint': '80C9F0F97906C21FDEB3C795F72C31F485CCBF08', 'hasPrivateKey': True} Dec 16 12:14:08.491066 waagent[2302]: 2025-12-16T12:14:08.491027Z INFO ExtHandler Fetch goal state completed Dec 16 12:14:08.504582 waagent[2302]: 2025-12-16T12:14:08.504519Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.5.4 30 Sep 2025 (Library: OpenSSL 3.5.4 30 Sep 2025) Dec 16 12:14:08.508248 waagent[2302]: 2025-12-16T12:14:08.508196Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2302 Dec 16 12:14:08.508363 waagent[2302]: 2025-12-16T12:14:08.508336Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Dec 16 12:14:08.508624 waagent[2302]: 2025-12-16T12:14:08.508594Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Dec 16 12:14:08.509790 waagent[2302]: 2025-12-16T12:14:08.509721Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4547.0.0', '', 'Flatcar Container Linux by Kinvolk'] Dec 16 12:14:08.510141 waagent[2302]: 2025-12-16T12:14:08.510105Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4547.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 16 12:14:08.510263 waagent[2302]: 2025-12-16T12:14:08.510238Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 16 12:14:08.510693 waagent[2302]: 2025-12-16T12:14:08.510660Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 16 12:14:08.572721 waagent[2302]: 2025-12-16T12:14:08.572682Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 16 12:14:08.572954 waagent[2302]: 2025-12-16T12:14:08.572921Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 16 12:14:08.578126 waagent[2302]: 2025-12-16T12:14:08.578088Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 16 12:14:08.583306 systemd[1]: Reload requested from client PID 2317 ('systemctl') (unit waagent.service)... Dec 16 12:14:08.583536 systemd[1]: Reloading... Dec 16 12:14:08.657823 zram_generator::config[2359]: No configuration found. Dec 16 12:14:08.819302 systemd[1]: Reloading finished in 235 ms. Dec 16 12:14:08.847996 waagent[2302]: 2025-12-16T12:14:08.847927Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Dec 16 12:14:08.848104 waagent[2302]: 2025-12-16T12:14:08.848078Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Dec 16 12:14:11.028581 waagent[2302]: 2025-12-16T12:14:11.028492Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Dec 16 12:14:11.028931 waagent[2302]: 2025-12-16T12:14:11.028854Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 16 12:14:11.029543 waagent[2302]: 2025-12-16T12:14:11.029498Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 16 12:14:11.029882 waagent[2302]: 2025-12-16T12:14:11.029799Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 16 12:14:11.030669 waagent[2302]: 2025-12-16T12:14:11.030064Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 12:14:11.030669 waagent[2302]: 2025-12-16T12:14:11.030137Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 12:14:11.030669 waagent[2302]: 2025-12-16T12:14:11.030295Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 16 12:14:11.030669 waagent[2302]: 2025-12-16T12:14:11.030435Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 16 12:14:11.030669 waagent[2302]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 16 12:14:11.030669 waagent[2302]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Dec 16 12:14:11.030669 waagent[2302]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 16 12:14:11.030669 waagent[2302]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 16 12:14:11.030669 waagent[2302]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 12:14:11.030669 waagent[2302]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 12:14:11.031007 waagent[2302]: 2025-12-16T12:14:11.030963Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 16 12:14:11.031062 waagent[2302]: 2025-12-16T12:14:11.031019Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 16 12:14:11.031542 waagent[2302]: 2025-12-16T12:14:11.031502Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 16 12:14:11.031586 waagent[2302]: 2025-12-16T12:14:11.031544Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Dec 16 12:14:11.032002 waagent[2302]: 2025-12-16T12:14:11.031966Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 12:14:11.032135 waagent[2302]: 2025-12-16T12:14:11.032109Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 16 12:14:11.032248 waagent[2302]: 2025-12-16T12:14:11.032225Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 12:14:11.033018 waagent[2302]: 2025-12-16T12:14:11.032974Z INFO EnvHandler ExtHandler Configure routes Dec 16 12:14:11.033505 waagent[2302]: 2025-12-16T12:14:11.033472Z INFO EnvHandler ExtHandler Gateway:None Dec 16 12:14:11.033549 waagent[2302]: 2025-12-16T12:14:11.033532Z INFO EnvHandler ExtHandler Routes:None Dec 16 12:14:11.041339 waagent[2302]: 2025-12-16T12:14:11.041281Z INFO ExtHandler ExtHandler Dec 16 12:14:11.041419 waagent[2302]: 2025-12-16T12:14:11.041373Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 3e967f8b-34d5-4d47-9ac1-3ff1580bc52a correlation d88f64ec-49f4-4492-84ef-916a253a2c51 created: 2025-12-16T12:12:51.587753Z] Dec 16 12:14:11.041730 waagent[2302]: 2025-12-16T12:14:11.041687Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
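The MonitorHandler dump above prints /proc/net/route verbatim, so destinations and gateways are little-endian hex: 0114C80A is the DHCP gateway 10.200.20.1, 0014C80A the local 10.200.20.0/24 subnet, 10813FA8 the wire server 168.63.129.16, and FEA9FEA9 is 169.254.169.254. A short decode sketch (field names follow the header in the dump; the helpers are mine):

    import socket
    import struct

    def hex_to_ip(h: str) -> str:
        """Convert a /proc/net/route hex field (little-endian u32) to dotted quad."""
        return socket.inet_ntoa(struct.pack("<L", int(h, 16)))

    def routes(path: str = "/proc/net/route"):
        with open(path) as f:
            header = f.readline().split()
            for line in f:
                row = dict(zip(header, line.split()))
                yield row["Iface"], hex_to_ip(row["Destination"]), hex_to_ip(row["Gateway"])

    if __name__ == "__main__":
        # On the host above this yields e.g. ('eth0', '168.63.129.16', '10.200.20.1')
        # for the 10813FA8 / 0114C80A row.
        for iface, dst, gw in routes():
            print(iface, dst, "via", gw)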
Dec 16 12:14:11.042181 waagent[2302]: 2025-12-16T12:14:11.042149Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Dec 16 12:14:11.175313 waagent[2302]: 2025-12-16T12:14:11.174100Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Dec 16 12:14:11.175313 waagent[2302]: Try `iptables -h' or 'iptables --help' for more information.) Dec 16 12:14:11.175313 waagent[2302]: 2025-12-16T12:14:11.175226Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: CC4D452C-B790-47BF-922E-26ED1EBE04DF;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Dec 16 12:14:11.243177 waagent[2302]: 2025-12-16T12:14:11.242811Z INFO MonitorHandler ExtHandler Network interfaces: Dec 16 12:14:11.243177 waagent[2302]: Executing ['ip', '-a', '-o', 'link']: Dec 16 12:14:11.243177 waagent[2302]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 16 12:14:11.243177 waagent[2302]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7b:74:38 brd ff:ff:ff:ff:ff:ff\ altname enx0022487b7438 Dec 16 12:14:11.243177 waagent[2302]: 3: enP11823s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7b:74:38 brd ff:ff:ff:ff:ff:ff\ altname enP11823p0s2 Dec 16 12:14:11.243177 waagent[2302]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 16 12:14:11.243177 waagent[2302]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 16 12:14:11.243177 waagent[2302]: 2: eth0 inet 10.200.20.36/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 16 12:14:11.243177 waagent[2302]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 16 12:14:11.243177 waagent[2302]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 16 12:14:11.243177 waagent[2302]: 2: eth0 inet6 fe80::222:48ff:fe7b:7438/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 16 12:14:11.516795 waagent[2302]: 2025-12-16T12:14:11.516656Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Dec 16 12:14:11.516795 waagent[2302]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:14:11.516795 waagent[2302]: pkts bytes target prot opt in out source destination Dec 16 12:14:11.516795 waagent[2302]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:14:11.516795 waagent[2302]: pkts bytes target prot opt in out source destination Dec 16 12:14:11.516795 waagent[2302]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:14:11.516795 waagent[2302]: pkts bytes target prot opt in out source destination Dec 16 12:14:11.516795 waagent[2302]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 12:14:11.516795 waagent[2302]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 12:14:11.516795 waagent[2302]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 12:14:11.519236 waagent[2302]: 2025-12-16T12:14:11.519182Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 16 12:14:11.519236 waagent[2302]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:14:11.519236 waagent[2302]: pkts bytes target prot opt in 
out source destination Dec 16 12:14:11.519236 waagent[2302]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:14:11.519236 waagent[2302]: pkts bytes target prot opt in out source destination Dec 16 12:14:11.519236 waagent[2302]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:14:11.519236 waagent[2302]: pkts bytes target prot opt in out source destination Dec 16 12:14:11.519236 waagent[2302]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 12:14:11.519236 waagent[2302]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 12:14:11.519236 waagent[2302]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 12:14:11.519442 waagent[2302]: 2025-12-16T12:14:11.519413Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Dec 16 12:14:15.791397 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 12:14:15.793547 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:14:15.906841 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:14:15.911992 (kubelet)[2455]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:14:15.998577 kubelet[2455]: E1216 12:14:15.998522 2455 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:14:16.001310 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:14:16.001427 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:14:16.002002 systemd[1]: kubelet.service: Consumed 113ms CPU time, 107.5M memory peak. Dec 16 12:14:26.041383 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 12:14:26.042773 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:14:26.361854 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:14:26.373011 (kubelet)[2470]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:14:26.401768 kubelet[2470]: E1216 12:14:26.401698 2470 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:14:26.404443 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:14:26.404643 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:14:26.406839 systemd[1]: kubelet.service: Consumed 110ms CPU time, 107.2M memory peak. Dec 16 12:14:27.505796 chronyd[2020]: Selected source PHC0 Dec 16 12:14:36.541579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 12:14:36.543366 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:14:36.763580 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
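The ExtHandler warning a little further up shows why the firewall packet counters come back empty (and presumably why DroppedPackets reads -1 in the heartbeat): under the nf_tables backend the combined iptables -w -t security -L OUTPUT --zero OUTPUT -nxv invocation is rejected. One way to gather the same counters on such a host is to list and zero in two separate calls; a hedged sketch (my workaround, not the agent's):

    import subprocess

    def security_output_counters() -> str:
        """List the OUTPUT chain of the security table, then zero its counters, as two invocations."""
        listing = subprocess.run(
            ["iptables", "-w", "-t", "security", "-L", "OUTPUT", "-n", "-x", "-v"],
            capture_output=True, text=True, check=True,
        ).stdout
        subprocess.run(["iptables", "-w", "-t", "security", "-Z", "OUTPUT"], check=True)
        return listing

    if __name__ == "__main__":
        print(security_output_counters())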
Dec 16 12:14:36.766363 (kubelet)[2485]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:14:36.792330 kubelet[2485]: E1216 12:14:36.792210 2485 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:14:36.795006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:14:36.795254 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:14:36.795899 systemd[1]: kubelet.service: Consumed 108ms CPU time, 106.9M memory peak. Dec 16 12:14:38.291374 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 12:14:38.292474 systemd[1]: Started sshd@0-10.200.20.36:22-10.200.16.10:46624.service - OpenSSH per-connection server daemon (10.200.16.10:46624). Dec 16 12:14:38.877439 sshd[2493]: Accepted publickey for core from 10.200.16.10 port 46624 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:14:38.878553 sshd-session[2493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:14:38.882935 systemd-logind[2038]: New session 4 of user core. Dec 16 12:14:38.891924 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 12:14:39.198561 systemd[1]: Started sshd@1-10.200.20.36:22-10.200.16.10:46626.service - OpenSSH per-connection server daemon (10.200.16.10:46626). Dec 16 12:14:39.537775 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Dec 16 12:14:39.621812 sshd[2500]: Accepted publickey for core from 10.200.16.10 port 46626 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:14:39.622939 sshd-session[2500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:14:39.627736 systemd-logind[2038]: New session 5 of user core. Dec 16 12:14:39.634092 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 12:14:39.859000 sshd[2504]: Connection closed by 10.200.16.10 port 46626 Dec 16 12:14:39.858219 sshd-session[2500]: pam_unix(sshd:session): session closed for user core Dec 16 12:14:39.861403 systemd-logind[2038]: Session 5 logged out. Waiting for processes to exit. Dec 16 12:14:39.862904 systemd[1]: sshd@1-10.200.20.36:22-10.200.16.10:46626.service: Deactivated successfully. Dec 16 12:14:39.866383 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 12:14:39.868193 systemd-logind[2038]: Removed session 5. Dec 16 12:14:39.949751 systemd[1]: Started sshd@2-10.200.20.36:22-10.200.16.10:46630.service - OpenSSH per-connection server daemon (10.200.16.10:46630). Dec 16 12:14:40.374723 sshd[2510]: Accepted publickey for core from 10.200.16.10 port 46630 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:14:40.375853 sshd-session[2510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:14:40.380699 systemd-logind[2038]: New session 6 of user core. Dec 16 12:14:40.387932 systemd[1]: Started session-6.scope - Session 6 of User core. 
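kubelet has now failed three times in a row for the same missing-config reason. The spacing of the "Scheduled restart job" entries (12:14:15.79, 12:14:26.04, 12:14:36.54, and 12:14:46.83 further down) is a steady ~10 s, which would be consistent with a RestartSec=10 drop-in plus scheduling overhead, though the unit file itself is not shown in this log. Quick arithmetic over the logged timestamps:

    from datetime import datetime

    # "Scheduled restart job" timestamps for kubelet.service, copied from this log.
    restarts = ["12:14:15.791397", "12:14:26.041383", "12:14:36.541579", "12:14:46.829392"]

    def deltas(stamps):
        ts = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
        return [round((b - a).total_seconds(), 2) for a, b in zip(ts, ts[1:])]

    print(deltas(restarts))  # [10.25, 10.5, 10.29]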
Dec 16 12:14:40.606474 sshd[2514]: Connection closed by 10.200.16.10 port 46630 Dec 16 12:14:40.607063 sshd-session[2510]: pam_unix(sshd:session): session closed for user core Dec 16 12:14:40.610891 systemd[1]: sshd@2-10.200.20.36:22-10.200.16.10:46630.service: Deactivated successfully. Dec 16 12:14:40.612666 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 12:14:40.614242 systemd-logind[2038]: Session 6 logged out. Waiting for processes to exit. Dec 16 12:14:40.615185 systemd-logind[2038]: Removed session 6. Dec 16 12:14:40.693011 systemd[1]: Started sshd@3-10.200.20.36:22-10.200.16.10:43968.service - OpenSSH per-connection server daemon (10.200.16.10:43968). Dec 16 12:14:41.087312 sshd[2520]: Accepted publickey for core from 10.200.16.10 port 43968 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:14:41.088437 sshd-session[2520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:14:41.092429 systemd-logind[2038]: New session 7 of user core. Dec 16 12:14:41.099917 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 12:14:41.303903 sshd[2524]: Connection closed by 10.200.16.10 port 43968 Dec 16 12:14:41.304455 sshd-session[2520]: pam_unix(sshd:session): session closed for user core Dec 16 12:14:41.308662 systemd-logind[2038]: Session 7 logged out. Waiting for processes to exit. Dec 16 12:14:41.309139 systemd[1]: sshd@3-10.200.20.36:22-10.200.16.10:43968.service: Deactivated successfully. Dec 16 12:14:41.312331 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 12:14:41.314428 systemd-logind[2038]: Removed session 7. Dec 16 12:14:41.397021 systemd[1]: Started sshd@4-10.200.20.36:22-10.200.16.10:43972.service - OpenSSH per-connection server daemon (10.200.16.10:43972). Dec 16 12:14:41.819571 sshd[2530]: Accepted publickey for core from 10.200.16.10 port 43972 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:14:41.820656 sshd-session[2530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:14:41.824823 systemd-logind[2038]: New session 8 of user core. Dec 16 12:14:41.831905 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 12:14:42.115307 sudo[2535]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 12:14:42.115546 sudo[2535]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:14:42.141590 sudo[2535]: pam_unix(sudo:session): session closed for user root Dec 16 12:14:42.219525 sshd[2534]: Connection closed by 10.200.16.10 port 43972 Dec 16 12:14:42.219374 sshd-session[2530]: pam_unix(sshd:session): session closed for user core Dec 16 12:14:42.223264 systemd-logind[2038]: Session 8 logged out. Waiting for processes to exit. Dec 16 12:14:42.223951 systemd[1]: sshd@4-10.200.20.36:22-10.200.16.10:43972.service: Deactivated successfully. Dec 16 12:14:42.225361 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 12:14:42.227030 systemd-logind[2038]: Removed session 8. Dec 16 12:14:42.312821 systemd[1]: Started sshd@5-10.200.20.36:22-10.200.16.10:43988.service - OpenSSH per-connection server daemon (10.200.16.10:43988). 
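From here on the journal interleaves raw audit records, and the executed command is carried in the PROCTITLE field as NUL-separated, hex-encoded argv (the first one below decodes to /sbin/auditctl -R /etc/audit/audit.rules). A decode helper, with a sample string copied from one of dockerd's iptables calls further down:

    def decode_proctitle(hex_str: str) -> str:
        """audit PROCTITLE is the argv joined by NUL bytes, hex-encoded."""
        return bytes.fromhex(hex_str).replace(b"\x00", b" ").decode()

    # PROCTITLE from the NETFILTER_CFG records below:
    print(decode_proctitle(
        "2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"))
    # -> /usr/bin/iptables --wait -t nat -N DOCKER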
Dec 16 12:14:42.736010 sshd[2542]: Accepted publickey for core from 10.200.16.10 port 43988 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:14:42.737121 sshd-session[2542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:14:42.741064 systemd-logind[2038]: New session 9 of user core. Dec 16 12:14:42.753106 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 12:14:42.896293 sudo[2548]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 12:14:42.896504 sudo[2548]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:14:42.901071 sudo[2548]: pam_unix(sudo:session): session closed for user root Dec 16 12:14:42.905889 sudo[2547]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 12:14:42.906085 sudo[2547]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:14:42.912226 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:14:42.940000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 12:14:42.941955 augenrules[2572]: No rules Dec 16 12:14:42.944053 kernel: kauditd_printk_skb: 113 callbacks suppressed Dec 16 12:14:42.944104 kernel: audit: type=1305 audit(1765887282.940:259): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 12:14:42.945459 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:14:42.945831 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:14:42.952939 sudo[2547]: pam_unix(sudo:session): session closed for user root Dec 16 12:14:42.940000 audit[2572]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdb2ddfa0 a2=420 a3=0 items=0 ppid=2553 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:42.971303 kernel: audit: type=1300 audit(1765887282.940:259): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdb2ddfa0 a2=420 a3=0 items=0 ppid=2553 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:42.971387 kernel: audit: type=1327 audit(1765887282.940:259): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:14:42.940000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:14:42.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:42.990491 kernel: audit: type=1130 audit(1765887282.943:260): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:42.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:14:43.002590 kernel: audit: type=1131 audit(1765887282.943:261): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:42.952000 audit[2547]: USER_END pid=2547 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:14:43.016290 kernel: audit: type=1106 audit(1765887282.952:262): pid=2547 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:14:43.016333 kernel: audit: type=1104 audit(1765887282.952:263): pid=2547 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:14:42.952000 audit[2547]: CRED_DISP pid=2547 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:14:43.030933 sshd[2546]: Connection closed by 10.200.16.10 port 43988 Dec 16 12:14:43.030692 sshd-session[2542]: pam_unix(sshd:session): session closed for user core Dec 16 12:14:43.032000 audit[2542]: USER_END pid=2542 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:14:43.032000 audit[2542]: CRED_DISP pid=2542 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:14:43.070876 kernel: audit: type=1106 audit(1765887283.032:264): pid=2542 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:14:43.070944 kernel: audit: type=1104 audit(1765887283.032:265): pid=2542 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:14:43.054374 systemd[1]: sshd@5-10.200.20.36:22-10.200.16.10:43988.service: Deactivated successfully. Dec 16 12:14:43.056457 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 12:14:43.071603 systemd-logind[2038]: Session 9 logged out. Waiting for processes to exit. Dec 16 12:14:43.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.36:22-10.200.16.10:43988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:14:43.087880 kernel: audit: type=1131 audit(1765887283.053:266): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.36:22-10.200.16.10:43988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:43.088106 systemd-logind[2038]: Removed session 9. Dec 16 12:14:43.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.36:22-10.200.16.10:43992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:43.126568 systemd[1]: Started sshd@6-10.200.20.36:22-10.200.16.10:43992.service - OpenSSH per-connection server daemon (10.200.16.10:43992). Dec 16 12:14:43.550000 audit[2581]: USER_ACCT pid=2581 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:14:43.551699 sshd[2581]: Accepted publickey for core from 10.200.16.10 port 43992 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:14:43.551000 audit[2581]: CRED_ACQ pid=2581 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:14:43.551000 audit[2581]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff5605ea0 a2=3 a3=0 items=0 ppid=1 pid=2581 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:43.551000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:14:43.552881 sshd-session[2581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:14:43.556512 systemd-logind[2038]: New session 10 of user core. Dec 16 12:14:43.566913 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 12:14:43.568000 audit[2581]: USER_START pid=2581 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:14:43.569000 audit[2585]: CRED_ACQ pid=2585 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:14:43.710000 audit[2586]: USER_ACCT pid=2586 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:14:43.710943 sudo[2586]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 12:14:43.710000 audit[2586]: CRED_REFR pid=2586 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 12:14:43.711343 sudo[2586]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:14:43.710000 audit[2586]: USER_START pid=2586 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:14:45.022367 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 16 12:14:45.030992 (dockerd)[2604]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 12:14:46.691768 dockerd[2604]: time="2025-12-16T12:14:46.689918385Z" level=info msg="Starting up" Dec 16 12:14:46.692776 dockerd[2604]: time="2025-12-16T12:14:46.692565179Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 12:14:46.702401 dockerd[2604]: time="2025-12-16T12:14:46.702371013Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 12:14:46.829392 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 16 12:14:46.830572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:14:46.843706 dockerd[2604]: time="2025-12-16T12:14:46.843669478Z" level=info msg="Loading containers: start." Dec 16 12:14:46.872818 kernel: Initializing XFRM netlink socket Dec 16 12:14:46.961000 audit[2653]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=2653 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:46.961000 audit[2653]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffd9665d30 a2=0 a3=0 items=0 ppid=2604 pid=2653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:46.961000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 12:14:46.963000 audit[2655]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=2655 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:46.963000 audit[2655]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffff935f570 a2=0 a3=0 items=0 ppid=2604 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:46.963000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 12:14:46.964000 audit[2657]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2657 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:46.964000 audit[2657]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffecf6e560 a2=0 a3=0 items=0 ppid=2604 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:46.964000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 12:14:46.966000 audit[2659]: NETFILTER_CFG table=filter:8 family=2 
entries=1 op=nft_register_chain pid=2659 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:46.966000 audit[2659]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffe83da20 a2=0 a3=0 items=0 ppid=2604 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:46.966000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 12:14:46.968000 audit[2661]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=2661 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:46.968000 audit[2661]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffce4a7cf0 a2=0 a3=0 items=0 ppid=2604 pid=2661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:46.968000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 12:14:46.969000 audit[2663]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=2663 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:46.969000 audit[2663]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd9899940 a2=0 a3=0 items=0 ppid=2604 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:46.969000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:14:46.971000 audit[2665]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=2665 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:46.971000 audit[2665]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc0ab7120 a2=0 a3=0 items=0 ppid=2604 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:46.971000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:14:47.164829 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:14:47.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:14:47.167833 (kubelet)[2674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:14:47.195017 kubelet[2674]: E1216 12:14:47.194965 2674 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:14:47.197104 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:14:47.197324 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:14:47.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:14:47.199795 systemd[1]: kubelet.service: Consumed 108ms CPU time, 106.9M memory peak. Dec 16 12:14:46.973000 audit[2667]: NETFILTER_CFG table=nat:12 family=2 entries=2 op=nft_register_chain pid=2667 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:46.973000 audit[2667]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=fffffe2d3ed0 a2=0 a3=0 items=0 ppid=2604 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:46.973000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 12:14:47.356000 audit[2682]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2682 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:47.356000 audit[2682]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffdb263660 a2=0 a3=0 items=0 ppid=2604 pid=2682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:47.356000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 12:14:47.357000 audit[2684]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=2684 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:47.357000 audit[2684]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd74c24f0 a2=0 a3=0 items=0 ppid=2604 pid=2684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:47.357000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 12:14:47.359000 audit[2686]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2686 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:47.359000 audit[2686]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffc92a6770 a2=0 a3=0 items=0 ppid=2604 pid=2686 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:47.359000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 12:14:47.361000 audit[2688]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2688 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:47.361000 audit[2688]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=fffff76e7f00 a2=0 a3=0 items=0 ppid=2604 pid=2688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:47.361000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:14:47.362000 audit[2690]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_rule pid=2690 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:47.362000 audit[2690]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=fffff8364bd0 a2=0 a3=0 items=0 ppid=2604 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:47.362000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 12:14:47.419000 audit[2720]: NETFILTER_CFG table=nat:18 family=10 entries=2 op=nft_register_chain pid=2720 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:47.419000 audit[2720]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffd9be0630 a2=0 a3=0 items=0 ppid=2604 pid=2720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:47.419000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 12:14:47.421000 audit[2722]: NETFILTER_CFG table=filter:19 family=10 entries=2 op=nft_register_chain pid=2722 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:47.421000 audit[2722]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffedd4f510 a2=0 a3=0 items=0 ppid=2604 pid=2722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:47.421000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 12:14:47.423000 audit[2724]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2724 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:47.423000 audit[2724]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff477e300 a2=0 a3=0 items=0 ppid=2604 pid=2724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:14:47.423000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 12:14:47.424000 audit[2726]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2726 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:47.424000 audit[2726]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd4d9e8d0 a2=0 a3=0 items=0 ppid=2604 pid=2726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:47.424000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 12:14:47.426000 audit[2728]: NETFILTER_CFG table=filter:22 family=10 entries=1 op=nft_register_chain pid=2728 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:47.426000 audit[2728]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff18d7c70 a2=0 a3=0 items=0 ppid=2604 pid=2728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:47.426000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 12:14:47.428000 audit[2730]: NETFILTER_CFG table=filter:23 family=10 entries=1 op=nft_register_chain pid=2730 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:47.428000 audit[2730]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffffdcb37f0 a2=0 a3=0 items=0 ppid=2604 pid=2730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:47.428000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:14:47.430000 audit[2732]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_chain pid=2732 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:47.430000 audit[2732]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe2b29c80 a2=0 a3=0 items=0 ppid=2604 pid=2732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:47.430000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:14:47.431000 audit[2734]: NETFILTER_CFG table=nat:25 family=10 entries=2 op=nft_register_chain pid=2734 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:47.431000 audit[2734]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffecd89060 a2=0 a3=0 items=0 ppid=2604 pid=2734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:47.431000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 12:14:47.433000 audit[2736]: NETFILTER_CFG table=nat:26 family=10 entries=2 op=nft_register_chain pid=2736 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:47.433000 audit[2736]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=fffff443cc50 a2=0 a3=0 items=0 ppid=2604 pid=2736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:47.433000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 12:14:47.435000 audit[2738]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=2738 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:47.435000 audit[2738]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffdf3c3d50 a2=0 a3=0 items=0 ppid=2604 pid=2738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:47.435000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 12:14:47.437000 audit[2740]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=2740 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:47.437000 audit[2740]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=fffffc59dff0 a2=0 a3=0 items=0 ppid=2604 pid=2740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:47.437000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 12:14:47.439000 audit[2742]: NETFILTER_CFG table=filter:29 family=10 entries=1 op=nft_register_rule pid=2742 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:47.439000 audit[2742]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffeceb6610 a2=0 a3=0 items=0 ppid=2604 pid=2742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:47.439000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:14:47.440000 audit[2744]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_rule pid=2744 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:47.440000 audit[2744]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=fffffb548f80 a2=0 a3=0 items=0 ppid=2604 pid=2744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:47.440000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 12:14:47.445000 audit[2749]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=2749 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:47.445000 audit[2749]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd2b8de00 a2=0 a3=0 items=0 ppid=2604 pid=2749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:47.445000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 12:14:47.446000 audit[2751]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=2751 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:47.446000 audit[2751]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffe82314f0 a2=0 a3=0 items=0 ppid=2604 pid=2751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:47.446000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 12:14:47.448000 audit[2753]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2753 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:47.448000 audit[2753]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffe0af0420 a2=0 a3=0 items=0 ppid=2604 pid=2753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:47.448000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 12:14:47.449000 audit[2755]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=2755 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:47.449000 audit[2755]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffff5fe3c0 a2=0 a3=0 items=0 ppid=2604 pid=2755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:47.449000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 12:14:47.451000 audit[2757]: NETFILTER_CFG table=filter:35 family=10 entries=1 op=nft_register_rule pid=2757 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:47.451000 audit[2757]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffee327c60 a2=0 a3=0 items=0 ppid=2604 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:47.451000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 12:14:47.453000 audit[2759]: NETFILTER_CFG table=filter:36 family=10 entries=1 op=nft_register_rule pid=2759 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:47.453000 audit[2759]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffca14a820 a2=0 a3=0 items=0 ppid=2604 pid=2759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:47.453000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 12:14:47.524000 audit[2764]: NETFILTER_CFG table=nat:37 family=2 entries=2 op=nft_register_chain pid=2764 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:47.524000 audit[2764]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffdd7f79a0 a2=0 a3=0 items=0 ppid=2604 pid=2764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:47.524000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 12:14:47.526000 audit[2766]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=2766 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:47.526000 audit[2766]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffc4b9dd80 a2=0 a3=0 items=0 ppid=2604 pid=2766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:47.526000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 12:14:47.533000 audit[2774]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2774 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:47.533000 audit[2774]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=fffff88352e0 a2=0 a3=0 items=0 ppid=2604 pid=2774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:47.533000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 12:14:47.537000 audit[2779]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2779 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:47.537000 audit[2779]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffdb4d39f0 a2=0 a3=0 items=0 ppid=2604 pid=2779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:47.537000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 12:14:47.539000 audit[2781]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2781 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 
12:14:47.539000 audit[2781]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffe51b8da0 a2=0 a3=0 items=0 ppid=2604 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:47.539000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 12:14:47.540000 audit[2783]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=2783 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:47.540000 audit[2783]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd7fc3350 a2=0 a3=0 items=0 ppid=2604 pid=2783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:47.540000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 12:14:47.542000 audit[2785]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=2785 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:47.542000 audit[2785]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=fffffc822660 a2=0 a3=0 items=0 ppid=2604 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:47.542000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:14:47.544000 audit[2787]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_rule pid=2787 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:47.544000 audit[2787]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffce710660 a2=0 a3=0 items=0 ppid=2604 pid=2787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:47.544000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 12:14:47.546845 systemd-networkd[1659]: docker0: Link UP Dec 16 12:14:47.562427 dockerd[2604]: time="2025-12-16T12:14:47.561889116Z" level=info msg="Loading containers: done." Dec 16 12:14:47.572924 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3832710243-merged.mount: Deactivated successfully. 
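Note: the NETFILTER_CFG/PROCTITLE audit records above carry the command line that triggered each event as hex-encoded bytes, with NUL separators between arguments. A minimal Python sketch for reading them (the helper name decode_proctitle is illustrative only; the sample value is copied verbatim from the first NETFILTER_CFG record emitted by dockerd above):

    # Decode an audit PROCTITLE value: hex-encoded argv elements joined by NUL bytes.
    def decode_proctitle(hex_value: str) -> str:
        return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode("utf-8", "replace")

    # Sample value taken from the audit[2653] NETFILTER_CFG record above.
    print(decode_proctitle(
        "2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"
    ))
    # -> /usr/bin/iptables --wait -t nat -N DOCKER

Decoded this way, the records above correspond to dockerd creating its usual DOCKER, DOCKER-FORWARD, DOCKER-USER and isolation chains for both iptables and ip6tables while loading containers.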
Dec 16 12:14:47.605785 dockerd[2604]: time="2025-12-16T12:14:47.605665838Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 12:14:47.606029 dockerd[2604]: time="2025-12-16T12:14:47.605829911Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 12:14:47.606029 dockerd[2604]: time="2025-12-16T12:14:47.605948939Z" level=info msg="Initializing buildkit" Dec 16 12:14:47.648622 dockerd[2604]: time="2025-12-16T12:14:47.648579569Z" level=info msg="Completed buildkit initialization" Dec 16 12:14:47.653676 dockerd[2604]: time="2025-12-16T12:14:47.653630117Z" level=info msg="Daemon has completed initialization" Dec 16 12:14:47.653799 dockerd[2604]: time="2025-12-16T12:14:47.653690915Z" level=info msg="API listen on /run/docker.sock" Dec 16 12:14:47.654215 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 12:14:47.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:48.317885 containerd[2077]: time="2025-12-16T12:14:48.317846317Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Dec 16 12:14:49.140167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount319750710.mount: Deactivated successfully. Dec 16 12:14:49.254647 update_engine[2039]: I20251216 12:14:49.254179 2039 update_attempter.cc:509] Updating boot flags... Dec 16 12:14:49.988791 containerd[2077]: time="2025-12-16T12:14:49.988084847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:49.991722 containerd[2077]: time="2025-12-16T12:14:49.991684305Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=23059636" Dec 16 12:14:49.994633 containerd[2077]: time="2025-12-16T12:14:49.994608800Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:49.998885 containerd[2077]: time="2025-12-16T12:14:49.998860435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:49.999407 containerd[2077]: time="2025-12-16T12:14:49.999384208Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 1.681502735s" Dec 16 12:14:49.999494 containerd[2077]: time="2025-12-16T12:14:49.999482746Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\"" Dec 16 12:14:50.000093 containerd[2077]: time="2025-12-16T12:14:50.000052243Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Dec 16 12:14:51.090797 
containerd[2077]: time="2025-12-16T12:14:51.090196193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:51.093946 containerd[2077]: time="2025-12-16T12:14:51.093920775Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19130075" Dec 16 12:14:51.096991 containerd[2077]: time="2025-12-16T12:14:51.096967786Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:51.102123 containerd[2077]: time="2025-12-16T12:14:51.102084860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:51.103392 containerd[2077]: time="2025-12-16T12:14:51.102768249Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.102676746s" Dec 16 12:14:51.103392 containerd[2077]: time="2025-12-16T12:14:51.102794019Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\"" Dec 16 12:14:51.103640 containerd[2077]: time="2025-12-16T12:14:51.103622695Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Dec 16 12:14:52.073823 containerd[2077]: time="2025-12-16T12:14:52.073750967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:52.076991 containerd[2077]: time="2025-12-16T12:14:52.076950491Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=14183580" Dec 16 12:14:52.080164 containerd[2077]: time="2025-12-16T12:14:52.080122533Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:52.085308 containerd[2077]: time="2025-12-16T12:14:52.085263630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:52.086427 containerd[2077]: time="2025-12-16T12:14:52.086386864Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 982.576343ms" Dec 16 12:14:52.086550 containerd[2077]: time="2025-12-16T12:14:52.086510037Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\"" Dec 16 12:14:52.087212 containerd[2077]: 
time="2025-12-16T12:14:52.087061525Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Dec 16 12:14:53.169432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3213643339.mount: Deactivated successfully. Dec 16 12:14:53.395228 containerd[2077]: time="2025-12-16T12:14:53.394770081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:53.398858 containerd[2077]: time="2025-12-16T12:14:53.398801946Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=22801532" Dec 16 12:14:53.401785 containerd[2077]: time="2025-12-16T12:14:53.401721659Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:53.405398 containerd[2077]: time="2025-12-16T12:14:53.405356764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:53.405948 containerd[2077]: time="2025-12-16T12:14:53.405608517Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.318452966s" Dec 16 12:14:53.405948 containerd[2077]: time="2025-12-16T12:14:53.405638544Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\"" Dec 16 12:14:53.406121 containerd[2077]: time="2025-12-16T12:14:53.406082869Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Dec 16 12:14:54.478661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1225747702.mount: Deactivated successfully. 
Dec 16 12:14:55.248795 containerd[2077]: time="2025-12-16T12:14:55.248658884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:55.252681 containerd[2077]: time="2025-12-16T12:14:55.252633143Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=19661099" Dec 16 12:14:55.256824 containerd[2077]: time="2025-12-16T12:14:55.255992556Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:55.260100 containerd[2077]: time="2025-12-16T12:14:55.260070633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:55.260650 containerd[2077]: time="2025-12-16T12:14:55.260619761Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.854504425s" Dec 16 12:14:55.260650 containerd[2077]: time="2025-12-16T12:14:55.260648884Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Dec 16 12:14:55.261500 containerd[2077]: time="2025-12-16T12:14:55.261481257Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Dec 16 12:14:55.803158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4233644896.mount: Deactivated successfully. 
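Note: each "Pulled image" message above pairs the stored image size in bytes with the wall-clock pull duration, so an approximate average pull rate falls out of a one-line division. A small illustrative calculation with the coredns figures from the log above (the numbers are copied from the message; treating size/duration as the transfer rate is an assumption that ignores decompression and registry round-trips):

    # Rough pull throughput for the coredns image reported above.
    size_bytes = 20_392_204      # size "20392204" from the Pulled image message
    duration_s = 1.854504425     # "in 1.854504425s"
    rate_mib_s = size_bytes / duration_s / (1024 * 1024)
    print(f"~{rate_mib_s:.1f} MiB/s")  # roughly 10.5 MiB/s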
Dec 16 12:14:55.825860 containerd[2077]: time="2025-12-16T12:14:55.825804231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:55.828786 containerd[2077]: time="2025-12-16T12:14:55.828690156Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=892" Dec 16 12:14:55.831724 containerd[2077]: time="2025-12-16T12:14:55.831679379Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:55.835841 containerd[2077]: time="2025-12-16T12:14:55.835785532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:55.836582 containerd[2077]: time="2025-12-16T12:14:55.836146912Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 574.564757ms" Dec 16 12:14:55.836582 containerd[2077]: time="2025-12-16T12:14:55.836176547Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Dec 16 12:14:55.836690 containerd[2077]: time="2025-12-16T12:14:55.836661412Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Dec 16 12:14:56.658875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3383223979.mount: Deactivated successfully. Dec 16 12:14:57.291380 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 16 12:14:57.292930 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:14:58.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:58.302900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:14:58.306629 kernel: kauditd_printk_skb: 134 callbacks suppressed Dec 16 12:14:58.306688 kernel: audit: type=1130 audit(1765887298.302:319): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:58.319446 (kubelet)[3076]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:14:58.345628 kubelet[3076]: E1216 12:14:58.345575 3076 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:14:58.347870 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:14:58.348092 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 16 12:14:58.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:14:58.348535 systemd[1]: kubelet.service: Consumed 112ms CPU time, 106.9M memory peak. Dec 16 12:14:58.361786 kernel: audit: type=1131 audit(1765887298.347:320): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:14:59.430365 containerd[2077]: time="2025-12-16T12:14:59.430313527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:59.433655 containerd[2077]: time="2025-12-16T12:14:59.433611678Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=96314798" Dec 16 12:14:59.437563 containerd[2077]: time="2025-12-16T12:14:59.437520570Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:59.441771 containerd[2077]: time="2025-12-16T12:14:59.441697826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:59.442452 containerd[2077]: time="2025-12-16T12:14:59.442423820Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 3.605741181s" Dec 16 12:14:59.442541 containerd[2077]: time="2025-12-16T12:14:59.442527510Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Dec 16 12:15:03.323740 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:15:03.323885 systemd[1]: kubelet.service: Consumed 112ms CPU time, 106.9M memory peak. Dec 16 12:15:03.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:03.327973 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:15:03.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:03.352141 kernel: audit: type=1130 audit(1765887303.323:321): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:03.352244 kernel: audit: type=1131 audit(1765887303.323:322): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:03.367875 systemd[1]: Reload requested from client PID 3116 ('systemctl') (unit session-10.scope)... 
Dec 16 12:15:03.367886 systemd[1]: Reloading... Dec 16 12:15:03.440790 zram_generator::config[3165]: No configuration found. Dec 16 12:15:03.624157 systemd[1]: Reloading finished in 256 ms. Dec 16 12:15:03.649000 audit: BPF prog-id=87 op=LOAD Dec 16 12:15:03.649000 audit: BPF prog-id=73 op=UNLOAD Dec 16 12:15:03.649000 audit: BPF prog-id=88 op=LOAD Dec 16 12:15:03.649000 audit: BPF prog-id=89 op=LOAD Dec 16 12:15:03.649000 audit: BPF prog-id=74 op=UNLOAD Dec 16 12:15:03.649000 audit: BPF prog-id=75 op=UNLOAD Dec 16 12:15:03.649000 audit: BPF prog-id=90 op=LOAD Dec 16 12:15:03.649000 audit: BPF prog-id=81 op=UNLOAD Dec 16 12:15:03.649000 audit: BPF prog-id=91 op=LOAD Dec 16 12:15:03.649000 audit: BPF prog-id=92 op=LOAD Dec 16 12:15:03.649000 audit: BPF prog-id=82 op=UNLOAD Dec 16 12:15:03.649000 audit: BPF prog-id=83 op=UNLOAD Dec 16 12:15:03.655000 audit: BPF prog-id=93 op=LOAD Dec 16 12:15:03.660474 kernel: audit: type=1334 audit(1765887303.649:323): prog-id=87 op=LOAD Dec 16 12:15:03.660556 kernel: audit: type=1334 audit(1765887303.649:324): prog-id=73 op=UNLOAD Dec 16 12:15:03.660575 kernel: audit: type=1334 audit(1765887303.649:325): prog-id=88 op=LOAD Dec 16 12:15:03.660591 kernel: audit: type=1334 audit(1765887303.649:326): prog-id=89 op=LOAD Dec 16 12:15:03.660605 kernel: audit: type=1334 audit(1765887303.649:327): prog-id=74 op=UNLOAD Dec 16 12:15:03.660626 kernel: audit: type=1334 audit(1765887303.649:328): prog-id=75 op=UNLOAD Dec 16 12:15:03.660640 kernel: audit: type=1334 audit(1765887303.649:329): prog-id=90 op=LOAD Dec 16 12:15:03.660652 kernel: audit: type=1334 audit(1765887303.649:330): prog-id=81 op=UNLOAD Dec 16 12:15:03.655000 audit: BPF prog-id=77 op=UNLOAD Dec 16 12:15:03.660000 audit: BPF prog-id=94 op=LOAD Dec 16 12:15:03.660000 audit: BPF prog-id=76 op=UNLOAD Dec 16 12:15:03.664000 audit: BPF prog-id=95 op=LOAD Dec 16 12:15:03.664000 audit: BPF prog-id=67 op=UNLOAD Dec 16 12:15:03.668000 audit: BPF prog-id=96 op=LOAD Dec 16 12:15:03.669000 audit: BPF prog-id=97 op=LOAD Dec 16 12:15:03.669000 audit: BPF prog-id=68 op=UNLOAD Dec 16 12:15:03.669000 audit: BPF prog-id=69 op=UNLOAD Dec 16 12:15:03.673000 audit: BPF prog-id=98 op=LOAD Dec 16 12:15:03.673000 audit: BPF prog-id=84 op=UNLOAD Dec 16 12:15:03.673000 audit: BPF prog-id=99 op=LOAD Dec 16 12:15:03.677000 audit: BPF prog-id=100 op=LOAD Dec 16 12:15:03.677000 audit: BPF prog-id=85 op=UNLOAD Dec 16 12:15:03.677000 audit: BPF prog-id=86 op=UNLOAD Dec 16 12:15:03.686000 audit: BPF prog-id=101 op=LOAD Dec 16 12:15:03.686000 audit: BPF prog-id=78 op=UNLOAD Dec 16 12:15:03.686000 audit: BPF prog-id=102 op=LOAD Dec 16 12:15:03.687000 audit: BPF prog-id=103 op=LOAD Dec 16 12:15:03.687000 audit: BPF prog-id=79 op=UNLOAD Dec 16 12:15:03.687000 audit: BPF prog-id=80 op=UNLOAD Dec 16 12:15:03.687000 audit: BPF prog-id=104 op=LOAD Dec 16 12:15:03.691000 audit: BPF prog-id=70 op=UNLOAD Dec 16 12:15:03.691000 audit: BPF prog-id=105 op=LOAD Dec 16 12:15:03.691000 audit: BPF prog-id=106 op=LOAD Dec 16 12:15:03.691000 audit: BPF prog-id=71 op=UNLOAD Dec 16 12:15:03.691000 audit: BPF prog-id=72 op=UNLOAD Dec 16 12:15:03.704223 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 12:15:03.704283 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 12:15:03.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 16 12:15:03.704768 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:15:03.704819 systemd[1]: kubelet.service: Consumed 110ms CPU time, 95.1M memory peak. Dec 16 12:15:03.706396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:15:03.869068 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:15:03.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:03.872360 (kubelet)[3232]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:15:03.901791 kubelet[3232]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:15:03.901791 kubelet[3232]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:15:03.997608 kubelet[3232]: I1216 12:15:03.997528 3232 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:15:04.314096 kubelet[3232]: I1216 12:15:04.313962 3232 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 12:15:04.314096 kubelet[3232]: I1216 12:15:04.313991 3232 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:15:04.315793 kubelet[3232]: I1216 12:15:04.315109 3232 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 12:15:04.315793 kubelet[3232]: I1216 12:15:04.315129 3232 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 12:15:04.315793 kubelet[3232]: I1216 12:15:04.315355 3232 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:15:04.400107 kubelet[3232]: E1216 12:15:04.400031 3232 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 12:15:04.402462 kubelet[3232]: I1216 12:15:04.402440 3232 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:15:04.411100 kubelet[3232]: I1216 12:15:04.411074 3232 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:15:04.414022 kubelet[3232]: I1216 12:15:04.413999 3232 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 16 12:15:04.414233 kubelet[3232]: I1216 12:15:04.414204 3232 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:15:04.414459 kubelet[3232]: I1216 12:15:04.414231 3232 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547.0.0-a-623de6ebc0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:15:04.414459 kubelet[3232]: I1216 12:15:04.414380 3232 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:15:04.414459 kubelet[3232]: I1216 12:15:04.414388 3232 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 12:15:04.414592 kubelet[3232]: I1216 12:15:04.414499 3232 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 12:15:04.420465 kubelet[3232]: I1216 12:15:04.420305 3232 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:15:04.421527 kubelet[3232]: I1216 12:15:04.421506 3232 kubelet.go:475] "Attempting to sync node with API server" Dec 16 12:15:04.421560 kubelet[3232]: I1216 12:15:04.421531 3232 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:15:04.422157 kubelet[3232]: E1216 12:15:04.422131 3232 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547.0.0-a-623de6ebc0&limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:15:04.423050 kubelet[3232]: I1216 12:15:04.422416 3232 kubelet.go:387] "Adding apiserver pod source" Dec 16 12:15:04.423050 kubelet[3232]: I1216 12:15:04.422431 3232 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:15:04.423313 kubelet[3232]: E1216 12:15:04.423135 3232 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.200.20.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 12:15:04.423571 kubelet[3232]: I1216 12:15:04.423550 3232 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 12:15:04.423975 kubelet[3232]: I1216 12:15:04.423952 3232 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 12:15:04.423975 kubelet[3232]: I1216 12:15:04.423979 3232 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 12:15:04.424058 kubelet[3232]: W1216 12:15:04.424012 3232 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 12:15:04.426352 kubelet[3232]: I1216 12:15:04.426334 3232 server.go:1262] "Started kubelet" Dec 16 12:15:04.427456 kubelet[3232]: I1216 12:15:04.427424 3232 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:15:04.429066 kubelet[3232]: I1216 12:15:04.429044 3232 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:15:04.429909 kubelet[3232]: I1216 12:15:04.429891 3232 server.go:310] "Adding debug handlers to kubelet server" Dec 16 12:15:04.431000 audit[3247]: NETFILTER_CFG table=mangle:45 family=2 entries=2 op=nft_register_chain pid=3247 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:04.431000 audit[3247]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff6576480 a2=0 a3=0 items=0 ppid=3232 pid=3247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:04.431000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 12:15:04.432000 audit[3249]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=3249 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:04.432000 audit[3249]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc0cf37b0 a2=0 a3=0 items=0 ppid=3232 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:04.432000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 12:15:04.433976 kubelet[3232]: I1216 12:15:04.433433 3232 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:15:04.433976 kubelet[3232]: I1216 12:15:04.433490 3232 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 12:15:04.433976 kubelet[3232]: I1216 12:15:04.433635 3232 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:15:04.434855 kubelet[3232]: I1216 12:15:04.434830 3232 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 12:15:04.435023 kubelet[3232]: E1216 12:15:04.435003 3232 kubelet_node_status.go:404] "Error 
getting the current node from lister" err="node \"ci-4547.0.0-a-623de6ebc0\" not found" Dec 16 12:15:04.435000 audit[3251]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=3251 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:04.435000 audit[3251]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffff22da1c0 a2=0 a3=0 items=0 ppid=3232 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:04.435000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:15:04.436959 kubelet[3232]: I1216 12:15:04.436930 3232 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 12:15:04.437021 kubelet[3232]: I1216 12:15:04.436979 3232 reconciler.go:29] "Reconciler: start to sync state" Dec 16 12:15:04.436000 audit[3253]: NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=3253 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:04.436000 audit[3253]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffffac2d700 a2=0 a3=0 items=0 ppid=3232 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:04.436000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:15:04.443339 kubelet[3232]: I1216 12:15:04.442890 3232 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:15:04.443000 audit[3256]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3256 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:04.443000 audit[3256]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffff9f64ec0 a2=0 a3=0 items=0 ppid=3232 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:04.443000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Dec 16 12:15:04.444345 kubelet[3232]: I1216 12:15:04.444078 3232 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Dec 16 12:15:04.444000 audit[3257]: NETFILTER_CFG table=mangle:50 family=10 entries=2 op=nft_register_chain pid=3257 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:04.444000 audit[3257]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffcd59c7d0 a2=0 a3=0 items=0 ppid=3232 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:04.444000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 12:15:04.445437 kubelet[3232]: I1216 12:15:04.445421 3232 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Dec 16 12:15:04.445847 kubelet[3232]: I1216 12:15:04.445487 3232 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 12:15:04.445847 kubelet[3232]: I1216 12:15:04.445525 3232 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 12:15:04.445847 kubelet[3232]: E1216 12:15:04.445560 3232 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:15:04.445000 audit[3258]: NETFILTER_CFG table=mangle:51 family=2 entries=1 op=nft_register_chain pid=3258 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:04.445000 audit[3258]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc0bf17d0 a2=0 a3=0 items=0 ppid=3232 pid=3258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:04.445000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 12:15:04.447396 kubelet[3232]: E1216 12:15:04.447364 3232 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-a-623de6ebc0?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="200ms" Dec 16 12:15:04.448648 kubelet[3232]: I1216 12:15:04.447895 3232 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:15:04.448648 kubelet[3232]: I1216 12:15:04.447983 3232 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:15:04.448648 kubelet[3232]: E1216 12:15:04.448473 3232 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:15:04.449686 kubelet[3232]: E1216 12:15:04.448518 3232 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.36:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.36:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4547.0.0-a-623de6ebc0.1881b125b5b3c2b6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547.0.0-a-623de6ebc0,UID:ci-4547.0.0-a-623de6ebc0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547.0.0-a-623de6ebc0,},FirstTimestamp:2025-12-16 12:15:04.426308278 +0000 UTC m=+0.551462234,LastTimestamp:2025-12-16 12:15:04.426308278 +0000 UTC m=+0.551462234,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547.0.0-a-623de6ebc0,}" Dec 16 12:15:04.450000 audit[3261]: NETFILTER_CFG table=mangle:52 family=10 entries=1 op=nft_register_chain pid=3261 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:04.450000 audit[3261]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdf968900 a2=0 a3=0 items=0 ppid=3232 pid=3261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:04.450000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 12:15:04.450000 audit[3260]: NETFILTER_CFG table=nat:53 family=2 entries=1 op=nft_register_chain pid=3260 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:04.450000 audit[3260]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffed97890 a2=0 a3=0 items=0 ppid=3232 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:04.450000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 12:15:04.451000 audit[3262]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=3262 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:04.451000 audit[3262]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdc391420 a2=0 a3=0 items=0 ppid=3232 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:04.451000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 12:15:04.452806 kubelet[3232]: E1216 12:15:04.452783 3232 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:15:04.452000 audit[3263]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=3263 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:04.452000 audit[3263]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdccb3760 a2=0 a3=0 items=0 ppid=3232 pid=3263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:04.452000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 12:15:04.453667 kubelet[3232]: I1216 12:15:04.453650 3232 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:15:04.454000 audit[3264]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=3264 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:04.454000 audit[3264]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffb7e6960 a2=0 a3=0 items=0 ppid=3232 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:04.454000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 12:15:04.457778 kubelet[3232]: E1216 12:15:04.457097 3232 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:15:04.467804 kubelet[3232]: I1216 12:15:04.467786 3232 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:15:04.467930 kubelet[3232]: I1216 12:15:04.467919 3232 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:15:04.467982 kubelet[3232]: I1216 12:15:04.467975 3232 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:15:04.535921 kubelet[3232]: E1216 12:15:04.535692 3232 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-623de6ebc0\" not found" Dec 16 12:15:04.545991 kubelet[3232]: E1216 12:15:04.545949 3232 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 12:15:04.588403 kubelet[3232]: I1216 12:15:04.588044 3232 policy_none.go:49] "None policy: Start" Dec 16 12:15:04.588403 kubelet[3232]: I1216 12:15:04.588078 3232 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 12:15:04.588403 kubelet[3232]: I1216 12:15:04.588092 3232 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 12:15:04.636213 kubelet[3232]: E1216 12:15:04.636171 3232 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-623de6ebc0\" not found" Dec 16 12:15:04.648905 kubelet[3232]: E1216 12:15:04.648869 3232 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-a-623de6ebc0?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="400ms" Dec 16 12:15:04.652468 kubelet[3232]: I1216 12:15:04.652438 3232 policy_none.go:47] "Start" Dec 16 12:15:04.657441 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 12:15:04.668244 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 12:15:04.672171 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 16 12:15:04.684787 kubelet[3232]: E1216 12:15:04.684457 3232 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:15:04.684787 kubelet[3232]: I1216 12:15:04.684654 3232 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:15:04.684787 kubelet[3232]: I1216 12:15:04.684665 3232 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:15:04.685537 kubelet[3232]: I1216 12:15:04.685435 3232 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:15:04.689131 kubelet[3232]: E1216 12:15:04.689111 3232 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 12:15:04.689189 kubelet[3232]: E1216 12:15:04.689156 3232 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4547.0.0-a-623de6ebc0\" not found" Dec 16 12:15:04.704011 kubelet[3232]: E1216 12:15:04.703906 3232 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.36:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.36:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4547.0.0-a-623de6ebc0.1881b125b5b3c2b6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547.0.0-a-623de6ebc0,UID:ci-4547.0.0-a-623de6ebc0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547.0.0-a-623de6ebc0,},FirstTimestamp:2025-12-16 12:15:04.426308278 +0000 UTC m=+0.551462234,LastTimestamp:2025-12-16 12:15:04.426308278 +0000 UTC m=+0.551462234,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547.0.0-a-623de6ebc0,}" Dec 16 12:15:04.787042 kubelet[3232]: I1216 12:15:04.786767 3232 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:04.787288 kubelet[3232]: E1216 12:15:04.787253 3232 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:04.839803 kubelet[3232]: I1216 12:15:04.839665 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e97c8fe2fabc628ab5c0d0c5aa25a616-kubeconfig\") pod \"kube-scheduler-ci-4547.0.0-a-623de6ebc0\" (UID: \"e97c8fe2fabc628ab5c0d0c5aa25a616\") " pod="kube-system/kube-scheduler-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:04.988995 kubelet[3232]: I1216 12:15:04.988963 3232 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:04.989511 kubelet[3232]: E1216 12:15:04.989326 3232 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:05.050286 kubelet[3232]: E1216 12:15:05.050242 3232 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-a-623de6ebc0?timeout=10s\": 
dial tcp 10.200.20.36:6443: connect: connection refused" interval="800ms" Dec 16 12:15:05.391541 kubelet[3232]: I1216 12:15:05.391506 3232 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:05.391872 kubelet[3232]: E1216 12:15:05.391847 3232 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:05.459633 kubelet[3232]: E1216 12:15:05.459599 3232 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:15:05.485320 kubelet[3232]: E1216 12:15:05.483952 3232 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 12:15:05.492833 systemd[1]: Created slice kubepods-burstable-pode97c8fe2fabc628ab5c0d0c5aa25a616.slice - libcontainer container kubepods-burstable-pode97c8fe2fabc628ab5c0d0c5aa25a616.slice. Dec 16 12:15:05.502896 kubelet[3232]: E1216 12:15:05.502488 3232 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-623de6ebc0\" not found" node="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:05.507430 systemd[1]: Created slice kubepods-burstable-poda1bdff413b780b9a7675f4afc6d64004.slice - libcontainer container kubepods-burstable-poda1bdff413b780b9a7675f4afc6d64004.slice. Dec 16 12:15:05.510095 containerd[2077]: time="2025-12-16T12:15:05.510057843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547.0.0-a-623de6ebc0,Uid:e97c8fe2fabc628ab5c0d0c5aa25a616,Namespace:kube-system,Attempt:0,}" Dec 16 12:15:05.512713 kubelet[3232]: E1216 12:15:05.512681 3232 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-623de6ebc0\" not found" node="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:05.516215 systemd[1]: Created slice kubepods-burstable-pod04568fe24d62bb1587e2baf926db6b90.slice - libcontainer container kubepods-burstable-pod04568fe24d62bb1587e2baf926db6b90.slice. 
Dec 16 12:15:05.517925 kubelet[3232]: E1216 12:15:05.517906 3232 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-623de6ebc0\" not found" node="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:05.543182 kubelet[3232]: I1216 12:15:05.543139 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a1bdff413b780b9a7675f4afc6d64004-ca-certs\") pod \"kube-apiserver-ci-4547.0.0-a-623de6ebc0\" (UID: \"a1bdff413b780b9a7675f4afc6d64004\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:05.543182 kubelet[3232]: I1216 12:15:05.543179 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a1bdff413b780b9a7675f4afc6d64004-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547.0.0-a-623de6ebc0\" (UID: \"a1bdff413b780b9a7675f4afc6d64004\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:05.543182 kubelet[3232]: I1216 12:15:05.543196 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/04568fe24d62bb1587e2baf926db6b90-ca-certs\") pod \"kube-controller-manager-ci-4547.0.0-a-623de6ebc0\" (UID: \"04568fe24d62bb1587e2baf926db6b90\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:05.543348 kubelet[3232]: I1216 12:15:05.543209 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/04568fe24d62bb1587e2baf926db6b90-k8s-certs\") pod \"kube-controller-manager-ci-4547.0.0-a-623de6ebc0\" (UID: \"04568fe24d62bb1587e2baf926db6b90\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:05.543348 kubelet[3232]: I1216 12:15:05.543220 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04568fe24d62bb1587e2baf926db6b90-kubeconfig\") pod \"kube-controller-manager-ci-4547.0.0-a-623de6ebc0\" (UID: \"04568fe24d62bb1587e2baf926db6b90\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:05.543348 kubelet[3232]: I1216 12:15:05.543229 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/04568fe24d62bb1587e2baf926db6b90-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547.0.0-a-623de6ebc0\" (UID: \"04568fe24d62bb1587e2baf926db6b90\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:05.543348 kubelet[3232]: I1216 12:15:05.543241 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a1bdff413b780b9a7675f4afc6d64004-k8s-certs\") pod \"kube-apiserver-ci-4547.0.0-a-623de6ebc0\" (UID: \"a1bdff413b780b9a7675f4afc6d64004\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:05.543348 kubelet[3232]: I1216 12:15:05.543252 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/04568fe24d62bb1587e2baf926db6b90-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4547.0.0-a-623de6ebc0\" (UID: \"04568fe24d62bb1587e2baf926db6b90\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:05.740730 kubelet[3232]: E1216 12:15:05.740607 3232 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:15:05.821662 containerd[2077]: time="2025-12-16T12:15:05.821605032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547.0.0-a-623de6ebc0,Uid:a1bdff413b780b9a7675f4afc6d64004,Namespace:kube-system,Attempt:0,}" Dec 16 12:15:05.827375 containerd[2077]: time="2025-12-16T12:15:05.827343791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547.0.0-a-623de6ebc0,Uid:04568fe24d62bb1587e2baf926db6b90,Namespace:kube-system,Attempt:0,}" Dec 16 12:15:05.851462 kubelet[3232]: E1216 12:15:05.851420 3232 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-a-623de6ebc0?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="1.6s" Dec 16 12:15:05.970898 kubelet[3232]: E1216 12:15:05.970849 3232 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547.0.0-a-623de6ebc0&limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:15:06.182325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2293431724.mount: Deactivated successfully. 
Dec 16 12:15:06.194520 kubelet[3232]: I1216 12:15:06.194465 3232 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:06.196268 kubelet[3232]: E1216 12:15:06.196239 3232 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:06.205598 containerd[2077]: time="2025-12-16T12:15:06.205105367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:15:06.215546 containerd[2077]: time="2025-12-16T12:15:06.215484896Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 12:15:06.222075 containerd[2077]: time="2025-12-16T12:15:06.222026398Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:15:06.227729 containerd[2077]: time="2025-12-16T12:15:06.227676212Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:15:06.231798 containerd[2077]: time="2025-12-16T12:15:06.231426894Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:15:06.234089 containerd[2077]: time="2025-12-16T12:15:06.234043920Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 12:15:06.236919 containerd[2077]: time="2025-12-16T12:15:06.236884753Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 12:15:06.241242 containerd[2077]: time="2025-12-16T12:15:06.241208852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:15:06.241667 containerd[2077]: time="2025-12-16T12:15:06.241639366Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 724.346393ms" Dec 16 12:15:06.246308 containerd[2077]: time="2025-12-16T12:15:06.246075517Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 397.843087ms" Dec 16 12:15:06.249318 containerd[2077]: time="2025-12-16T12:15:06.249289818Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 419.233935ms" Dec 16 12:15:06.319624 containerd[2077]: time="2025-12-16T12:15:06.319581271Z" level=info msg="connecting to shim 1c32212fdd3f5dab811b0b03dc645d207d1159e5d9d6c04d53c2b68a3426c994" address="unix:///run/containerd/s/b00334f574d22c01b686127c091e8a96f68f7fcd84e50fecd224138b0c3966ce" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:15:06.320923 containerd[2077]: time="2025-12-16T12:15:06.320895761Z" level=info msg="connecting to shim 8b87b9167a9faddc71493fbf83bce660ba94f667ca5fea5527c36ea303164444" address="unix:///run/containerd/s/79be3269d2f70a35385a5e5a9cf4331c237039c812cb1e3569b36a1583f5a1be" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:15:06.341036 containerd[2077]: time="2025-12-16T12:15:06.340998410Z" level=info msg="connecting to shim 0d1dacf28bb05e77ef950436ad06117fe596761fd02cf46f10f27a76f984ab73" address="unix:///run/containerd/s/68c25e631a895ff9b274fd6eddc113a3d4d4825383b9e9f8881c19a96936fa42" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:15:06.341961 systemd[1]: Started cri-containerd-8b87b9167a9faddc71493fbf83bce660ba94f667ca5fea5527c36ea303164444.scope - libcontainer container 8b87b9167a9faddc71493fbf83bce660ba94f667ca5fea5527c36ea303164444. Dec 16 12:15:06.348663 systemd[1]: Started cri-containerd-1c32212fdd3f5dab811b0b03dc645d207d1159e5d9d6c04d53c2b68a3426c994.scope - libcontainer container 1c32212fdd3f5dab811b0b03dc645d207d1159e5d9d6c04d53c2b68a3426c994. Dec 16 12:15:06.358000 audit: BPF prog-id=107 op=LOAD Dec 16 12:15:06.359000 audit: BPF prog-id=108 op=LOAD Dec 16 12:15:06.359000 audit[3312]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3289 pid=3312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.359000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862383762393136376139666164646337313439336662663833626365 Dec 16 12:15:06.359000 audit: BPF prog-id=108 op=UNLOAD Dec 16 12:15:06.359000 audit[3312]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3289 pid=3312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.359000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862383762393136376139666164646337313439336662663833626365 Dec 16 12:15:06.359000 audit: BPF prog-id=109 op=LOAD Dec 16 12:15:06.359000 audit[3312]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3289 pid=3312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.359000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862383762393136376139666164646337313439336662663833626365 Dec 16 12:15:06.359000 audit: BPF prog-id=110 op=LOAD Dec 16 12:15:06.359000 audit[3312]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3289 pid=3312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.359000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862383762393136376139666164646337313439336662663833626365 Dec 16 12:15:06.359000 audit: BPF prog-id=110 op=UNLOAD Dec 16 12:15:06.359000 audit[3312]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3289 pid=3312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.359000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862383762393136376139666164646337313439336662663833626365 Dec 16 12:15:06.359000 audit: BPF prog-id=109 op=UNLOAD Dec 16 12:15:06.359000 audit[3312]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3289 pid=3312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.359000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862383762393136376139666164646337313439336662663833626365 Dec 16 12:15:06.359000 audit: BPF prog-id=111 op=LOAD Dec 16 12:15:06.359000 audit[3312]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3289 pid=3312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.359000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862383762393136376139666164646337313439336662663833626365 Dec 16 12:15:06.364000 audit: BPF prog-id=112 op=LOAD Dec 16 12:15:06.365000 audit: BPF prog-id=113 op=LOAD Dec 16 12:15:06.365000 audit[3313]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=3284 pid=3313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.365000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163333232313266646433663564616238313162306230336463363435 Dec 16 12:15:06.366000 audit: BPF prog-id=113 op=UNLOAD Dec 16 12:15:06.366000 audit[3313]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3284 pid=3313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163333232313266646433663564616238313162306230336463363435 Dec 16 12:15:06.366000 audit: BPF prog-id=114 op=LOAD Dec 16 12:15:06.366000 audit[3313]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3284 pid=3313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163333232313266646433663564616238313162306230336463363435 Dec 16 12:15:06.366000 audit: BPF prog-id=115 op=LOAD Dec 16 12:15:06.366000 audit[3313]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=3284 pid=3313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163333232313266646433663564616238313162306230336463363435 Dec 16 12:15:06.367000 audit: BPF prog-id=115 op=UNLOAD Dec 16 12:15:06.367000 audit[3313]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3284 pid=3313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.367000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163333232313266646433663564616238313162306230336463363435 Dec 16 12:15:06.367000 audit: BPF prog-id=114 op=UNLOAD Dec 16 12:15:06.367000 audit[3313]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3284 pid=3313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.367000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163333232313266646433663564616238313162306230336463363435 Dec 16 12:15:06.367000 audit: BPF prog-id=116 op=LOAD Dec 16 12:15:06.367000 audit[3313]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=3284 pid=3313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.367000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163333232313266646433663564616238313162306230336463363435 Dec 16 12:15:06.371948 systemd[1]: Started cri-containerd-0d1dacf28bb05e77ef950436ad06117fe596761fd02cf46f10f27a76f984ab73.scope - libcontainer container 0d1dacf28bb05e77ef950436ad06117fe596761fd02cf46f10f27a76f984ab73. Dec 16 12:15:06.386000 audit: BPF prog-id=117 op=LOAD Dec 16 12:15:06.387000 audit: BPF prog-id=118 op=LOAD Dec 16 12:15:06.387000 audit[3354]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3338 pid=3354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064316461636632386262303565373765663935303433366164303631 Dec 16 12:15:06.387000 audit: BPF prog-id=118 op=UNLOAD Dec 16 12:15:06.387000 audit[3354]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3338 pid=3354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064316461636632386262303565373765663935303433366164303631 Dec 16 12:15:06.387000 audit: BPF prog-id=119 op=LOAD Dec 16 12:15:06.387000 audit[3354]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3338 pid=3354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064316461636632386262303565373765663935303433366164303631 Dec 16 12:15:06.388000 audit: BPF prog-id=120 op=LOAD Dec 16 12:15:06.388000 audit[3354]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3338 pid=3354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064316461636632386262303565373765663935303433366164303631 Dec 16 12:15:06.389000 audit: BPF prog-id=120 op=UNLOAD Dec 16 12:15:06.389000 audit[3354]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3338 pid=3354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.389000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064316461636632386262303565373765663935303433366164303631 Dec 16 12:15:06.389000 audit: BPF prog-id=119 op=UNLOAD Dec 16 12:15:06.389000 audit[3354]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3338 pid=3354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.389000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064316461636632386262303565373765663935303433366164303631 Dec 16 12:15:06.391336 containerd[2077]: time="2025-12-16T12:15:06.391302442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547.0.0-a-623de6ebc0,Uid:a1bdff413b780b9a7675f4afc6d64004,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b87b9167a9faddc71493fbf83bce660ba94f667ca5fea5527c36ea303164444\"" Dec 16 12:15:06.389000 audit: BPF prog-id=121 op=LOAD Dec 16 12:15:06.389000 audit[3354]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3338 pid=3354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.389000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064316461636632386262303565373765663935303433366164303631 Dec 16 12:15:06.401622 containerd[2077]: time="2025-12-16T12:15:06.401493240Z" level=info msg="CreateContainer within sandbox \"8b87b9167a9faddc71493fbf83bce660ba94f667ca5fea5527c36ea303164444\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 12:15:06.407457 kubelet[3232]: E1216 12:15:06.407428 3232 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 12:15:06.423113 containerd[2077]: time="2025-12-16T12:15:06.422885064Z" level=info 
msg="Container cf4a473c6dcc75a8e23d84d592b82a41e12da5047f8568c2cdb920f1f0e1de46: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:15:06.425848 containerd[2077]: time="2025-12-16T12:15:06.425811609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547.0.0-a-623de6ebc0,Uid:e97c8fe2fabc628ab5c0d0c5aa25a616,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c32212fdd3f5dab811b0b03dc645d207d1159e5d9d6c04d53c2b68a3426c994\"" Dec 16 12:15:06.429432 containerd[2077]: time="2025-12-16T12:15:06.429253573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547.0.0-a-623de6ebc0,Uid:04568fe24d62bb1587e2baf926db6b90,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d1dacf28bb05e77ef950436ad06117fe596761fd02cf46f10f27a76f984ab73\"" Dec 16 12:15:06.434767 containerd[2077]: time="2025-12-16T12:15:06.434657075Z" level=info msg="CreateContainer within sandbox \"1c32212fdd3f5dab811b0b03dc645d207d1159e5d9d6c04d53c2b68a3426c994\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 12:15:06.439309 containerd[2077]: time="2025-12-16T12:15:06.439269938Z" level=info msg="CreateContainer within sandbox \"0d1dacf28bb05e77ef950436ad06117fe596761fd02cf46f10f27a76f984ab73\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 12:15:06.448798 containerd[2077]: time="2025-12-16T12:15:06.448737889Z" level=info msg="CreateContainer within sandbox \"8b87b9167a9faddc71493fbf83bce660ba94f667ca5fea5527c36ea303164444\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cf4a473c6dcc75a8e23d84d592b82a41e12da5047f8568c2cdb920f1f0e1de46\"" Dec 16 12:15:06.449362 containerd[2077]: time="2025-12-16T12:15:06.449331372Z" level=info msg="StartContainer for \"cf4a473c6dcc75a8e23d84d592b82a41e12da5047f8568c2cdb920f1f0e1de46\"" Dec 16 12:15:06.450257 containerd[2077]: time="2025-12-16T12:15:06.450228677Z" level=info msg="connecting to shim cf4a473c6dcc75a8e23d84d592b82a41e12da5047f8568c2cdb920f1f0e1de46" address="unix:///run/containerd/s/79be3269d2f70a35385a5e5a9cf4331c237039c812cb1e3569b36a1583f5a1be" protocol=ttrpc version=3 Dec 16 12:15:06.467075 systemd[1]: Started cri-containerd-cf4a473c6dcc75a8e23d84d592b82a41e12da5047f8568c2cdb920f1f0e1de46.scope - libcontainer container cf4a473c6dcc75a8e23d84d592b82a41e12da5047f8568c2cdb920f1f0e1de46. 
Dec 16 12:15:06.474707 containerd[2077]: time="2025-12-16T12:15:06.474636559Z" level=info msg="Container 67d78c569e5ee687c06a34bc71f4adb06bf9bacb5edf0dc668ec37ef776ad4af: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:15:06.481000 audit: BPF prog-id=122 op=LOAD Dec 16 12:15:06.481000 audit: BPF prog-id=123 op=LOAD Dec 16 12:15:06.481000 audit[3407]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000174180 a2=98 a3=0 items=0 ppid=3289 pid=3407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366346134373363366463633735613865323364383464353932623832 Dec 16 12:15:06.481000 audit: BPF prog-id=123 op=UNLOAD Dec 16 12:15:06.481000 audit[3407]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3289 pid=3407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366346134373363366463633735613865323364383464353932623832 Dec 16 12:15:06.482000 audit: BPF prog-id=124 op=LOAD Dec 16 12:15:06.482000 audit[3407]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001743e8 a2=98 a3=0 items=0 ppid=3289 pid=3407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366346134373363366463633735613865323364383464353932623832 Dec 16 12:15:06.482000 audit: BPF prog-id=125 op=LOAD Dec 16 12:15:06.482000 audit[3407]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000174168 a2=98 a3=0 items=0 ppid=3289 pid=3407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366346134373363366463633735613865323364383464353932623832 Dec 16 12:15:06.482000 audit: BPF prog-id=125 op=UNLOAD Dec 16 12:15:06.482000 audit[3407]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3289 pid=3407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.482000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366346134373363366463633735613865323364383464353932623832 Dec 16 12:15:06.484717 containerd[2077]: time="2025-12-16T12:15:06.484687984Z" level=info msg="Container 3dab7152e8e2f8a544cf0e3793db1f7a9063f82b690782689a0afaeebedc741f: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:15:06.483000 audit: BPF prog-id=124 op=UNLOAD Dec 16 12:15:06.483000 audit[3407]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3289 pid=3407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366346134373363366463633735613865323364383464353932623832 Dec 16 12:15:06.483000 audit: BPF prog-id=126 op=LOAD Dec 16 12:15:06.483000 audit[3407]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000174648 a2=98 a3=0 items=0 ppid=3289 pid=3407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366346134373363366463633735613865323364383464353932623832 Dec 16 12:15:06.503439 containerd[2077]: time="2025-12-16T12:15:06.503398599Z" level=info msg="CreateContainer within sandbox \"0d1dacf28bb05e77ef950436ad06117fe596761fd02cf46f10f27a76f984ab73\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"67d78c569e5ee687c06a34bc71f4adb06bf9bacb5edf0dc668ec37ef776ad4af\"" Dec 16 12:15:06.504383 containerd[2077]: time="2025-12-16T12:15:06.504361534Z" level=info msg="StartContainer for \"67d78c569e5ee687c06a34bc71f4adb06bf9bacb5edf0dc668ec37ef776ad4af\"" Dec 16 12:15:06.505510 containerd[2077]: time="2025-12-16T12:15:06.505484949Z" level=info msg="connecting to shim 67d78c569e5ee687c06a34bc71f4adb06bf9bacb5edf0dc668ec37ef776ad4af" address="unix:///run/containerd/s/68c25e631a895ff9b274fd6eddc113a3d4d4825383b9e9f8881c19a96936fa42" protocol=ttrpc version=3 Dec 16 12:15:06.518702 containerd[2077]: time="2025-12-16T12:15:06.518551672Z" level=info msg="CreateContainer within sandbox \"1c32212fdd3f5dab811b0b03dc645d207d1159e5d9d6c04d53c2b68a3426c994\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3dab7152e8e2f8a544cf0e3793db1f7a9063f82b690782689a0afaeebedc741f\"" Dec 16 12:15:06.522873 containerd[2077]: time="2025-12-16T12:15:06.522845728Z" level=info msg="StartContainer for \"3dab7152e8e2f8a544cf0e3793db1f7a9063f82b690782689a0afaeebedc741f\"" Dec 16 12:15:06.525491 containerd[2077]: time="2025-12-16T12:15:06.524177467Z" level=info msg="StartContainer for \"cf4a473c6dcc75a8e23d84d592b82a41e12da5047f8568c2cdb920f1f0e1de46\" returns successfully" Dec 16 12:15:06.525491 containerd[2077]: time="2025-12-16T12:15:06.524738050Z" level=info msg="connecting to shim 
3dab7152e8e2f8a544cf0e3793db1f7a9063f82b690782689a0afaeebedc741f" address="unix:///run/containerd/s/b00334f574d22c01b686127c091e8a96f68f7fcd84e50fecd224138b0c3966ce" protocol=ttrpc version=3 Dec 16 12:15:06.536035 systemd[1]: Started cri-containerd-67d78c569e5ee687c06a34bc71f4adb06bf9bacb5edf0dc668ec37ef776ad4af.scope - libcontainer container 67d78c569e5ee687c06a34bc71f4adb06bf9bacb5edf0dc668ec37ef776ad4af. Dec 16 12:15:06.546000 audit: BPF prog-id=127 op=LOAD Dec 16 12:15:06.548000 audit: BPF prog-id=128 op=LOAD Dec 16 12:15:06.548000 audit[3440]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=3338 pid=3440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.548000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637643738633536396535656536383763303661333462633731663461 Dec 16 12:15:06.548000 audit: BPF prog-id=128 op=UNLOAD Dec 16 12:15:06.548000 audit[3440]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3338 pid=3440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.548000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637643738633536396535656536383763303661333462633731663461 Dec 16 12:15:06.548000 audit: BPF prog-id=129 op=LOAD Dec 16 12:15:06.548000 audit[3440]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3338 pid=3440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.548000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637643738633536396535656536383763303661333462633731663461 Dec 16 12:15:06.548000 audit: BPF prog-id=130 op=LOAD Dec 16 12:15:06.548000 audit[3440]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3338 pid=3440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.548000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637643738633536396535656536383763303661333462633731663461 Dec 16 12:15:06.548000 audit: BPF prog-id=130 op=UNLOAD Dec 16 12:15:06.548000 audit[3440]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3338 pid=3440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 12:15:06.548000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637643738633536396535656536383763303661333462633731663461 Dec 16 12:15:06.548000 audit: BPF prog-id=129 op=UNLOAD Dec 16 12:15:06.548000 audit[3440]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3338 pid=3440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.548000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637643738633536396535656536383763303661333462633731663461 Dec 16 12:15:06.548000 audit: BPF prog-id=131 op=LOAD Dec 16 12:15:06.548000 audit[3440]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=3338 pid=3440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.548000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637643738633536396535656536383763303661333462633731663461 Dec 16 12:15:06.559509 systemd[1]: Started cri-containerd-3dab7152e8e2f8a544cf0e3793db1f7a9063f82b690782689a0afaeebedc741f.scope - libcontainer container 3dab7152e8e2f8a544cf0e3793db1f7a9063f82b690782689a0afaeebedc741f. 
Dec 16 12:15:06.572000 audit: BPF prog-id=132 op=LOAD Dec 16 12:15:06.574000 audit: BPF prog-id=133 op=LOAD Dec 16 12:15:06.574000 audit[3452]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=3284 pid=3452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364616237313532653865326638613534346366306533373933646231 Dec 16 12:15:06.574000 audit: BPF prog-id=133 op=UNLOAD Dec 16 12:15:06.574000 audit[3452]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3284 pid=3452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364616237313532653865326638613534346366306533373933646231 Dec 16 12:15:06.574000 audit: BPF prog-id=134 op=LOAD Dec 16 12:15:06.574000 audit[3452]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3284 pid=3452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364616237313532653865326638613534346366306533373933646231 Dec 16 12:15:06.574000 audit: BPF prog-id=135 op=LOAD Dec 16 12:15:06.574000 audit[3452]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3284 pid=3452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364616237313532653865326638613534346366306533373933646231 Dec 16 12:15:06.574000 audit: BPF prog-id=135 op=UNLOAD Dec 16 12:15:06.574000 audit[3452]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3284 pid=3452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364616237313532653865326638613534346366306533373933646231 Dec 16 12:15:06.574000 audit: BPF prog-id=134 op=UNLOAD Dec 16 12:15:06.574000 audit[3452]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3284 pid=3452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364616237313532653865326638613534346366306533373933646231 Dec 16 12:15:06.574000 audit: BPF prog-id=136 op=LOAD Dec 16 12:15:06.574000 audit[3452]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=3284 pid=3452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:06.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364616237313532653865326638613534346366306533373933646231 Dec 16 12:15:06.593473 containerd[2077]: time="2025-12-16T12:15:06.593428146Z" level=info msg="StartContainer for \"67d78c569e5ee687c06a34bc71f4adb06bf9bacb5edf0dc668ec37ef776ad4af\" returns successfully" Dec 16 12:15:06.619518 containerd[2077]: time="2025-12-16T12:15:06.619469309Z" level=info msg="StartContainer for \"3dab7152e8e2f8a544cf0e3793db1f7a9063f82b690782689a0afaeebedc741f\" returns successfully" Dec 16 12:15:07.483149 kubelet[3232]: E1216 12:15:07.483116 3232 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-623de6ebc0\" not found" node="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:07.490118 kubelet[3232]: E1216 12:15:07.490075 3232 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-623de6ebc0\" not found" node="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:07.491077 kubelet[3232]: E1216 12:15:07.491050 3232 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-623de6ebc0\" not found" node="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:07.704208 kubelet[3232]: E1216 12:15:07.704156 3232 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4547.0.0-a-623de6ebc0\" not found" node="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:07.798493 kubelet[3232]: I1216 12:15:07.798449 3232 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:07.877476 kubelet[3232]: I1216 12:15:07.877438 3232 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:07.877476 kubelet[3232]: E1216 12:15:07.877474 3232 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4547.0.0-a-623de6ebc0\": node \"ci-4547.0.0-a-623de6ebc0\" not found" Dec 16 12:15:07.945067 kubelet[3232]: E1216 12:15:07.945025 3232 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-623de6ebc0\" not found" Dec 16 12:15:08.045471 kubelet[3232]: E1216 12:15:08.045428 3232 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-623de6ebc0\" 
not found" Dec 16 12:15:08.146464 kubelet[3232]: E1216 12:15:08.146048 3232 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-623de6ebc0\" not found" Dec 16 12:15:08.246684 kubelet[3232]: E1216 12:15:08.246640 3232 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-623de6ebc0\" not found" Dec 16 12:15:08.347412 kubelet[3232]: E1216 12:15:08.347366 3232 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-623de6ebc0\" not found" Dec 16 12:15:08.447929 kubelet[3232]: E1216 12:15:08.447653 3232 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-623de6ebc0\" not found" Dec 16 12:15:08.493383 kubelet[3232]: E1216 12:15:08.492982 3232 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-623de6ebc0\" not found" node="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:08.493383 kubelet[3232]: E1216 12:15:08.493279 3232 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-623de6ebc0\" not found" node="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:08.494393 kubelet[3232]: E1216 12:15:08.494287 3232 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-623de6ebc0\" not found" node="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:08.547994 kubelet[3232]: E1216 12:15:08.547948 3232 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-623de6ebc0\" not found" Dec 16 12:15:08.636068 kubelet[3232]: I1216 12:15:08.635812 3232 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:08.664525 kubelet[3232]: I1216 12:15:08.664188 3232 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:15:08.664525 kubelet[3232]: I1216 12:15:08.664313 3232 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:08.768186 kubelet[3232]: I1216 12:15:08.767875 3232 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:15:08.768186 kubelet[3232]: I1216 12:15:08.767980 3232 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:08.816702 kubelet[3232]: I1216 12:15:08.816537 3232 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:15:09.424499 kubelet[3232]: I1216 12:15:09.424448 3232 apiserver.go:52] "Watching apiserver" Dec 16 12:15:09.437951 kubelet[3232]: I1216 12:15:09.437895 3232 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 12:15:09.493092 kubelet[3232]: I1216 12:15:09.492772 3232 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:09.493092 kubelet[3232]: I1216 12:15:09.492846 3232 kubelet.go:3219] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:09.493295 kubelet[3232]: I1216 12:15:09.493275 3232 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:09.562364 kubelet[3232]: I1216 12:15:09.562322 3232 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:15:09.563034 kubelet[3232]: I1216 12:15:09.562828 3232 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:15:09.563034 kubelet[3232]: E1216 12:15:09.562853 3232 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.0.0-a-623de6ebc0\" already exists" pod="kube-system/kube-scheduler-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:09.563034 kubelet[3232]: E1216 12:15:09.562978 3232 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547.0.0-a-623de6ebc0\" already exists" pod="kube-system/kube-controller-manager-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:09.563608 kubelet[3232]: I1216 12:15:09.563464 3232 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:15:09.563608 kubelet[3232]: E1216 12:15:09.563494 3232 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.0.0-a-623de6ebc0\" already exists" pod="kube-system/kube-apiserver-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:10.938218 systemd[1]: Reload requested from client PID 3515 ('systemctl') (unit session-10.scope)... Dec 16 12:15:10.938234 systemd[1]: Reloading... Dec 16 12:15:11.007829 zram_generator::config[3565]: No configuration found. Dec 16 12:15:11.182694 systemd[1]: Reloading finished in 244 ms. Dec 16 12:15:11.201285 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:15:11.219188 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 12:15:11.219631 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:15:11.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:11.219948 systemd[1]: kubelet.service: Consumed 650ms CPU time, 121.3M memory peak. Dec 16 12:15:11.223790 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 16 12:15:11.223897 kernel: audit: type=1131 audit(1765887311.219:425): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:11.225080 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 12:15:11.236000 audit: BPF prog-id=137 op=LOAD Dec 16 12:15:11.241819 kernel: audit: type=1334 audit(1765887311.236:426): prog-id=137 op=LOAD Dec 16 12:15:11.249259 kernel: audit: type=1334 audit(1765887311.242:427): prog-id=138 op=LOAD Dec 16 12:15:11.242000 audit: BPF prog-id=138 op=LOAD Dec 16 12:15:11.242000 audit: BPF prog-id=96 op=UNLOAD Dec 16 12:15:11.253803 kernel: audit: type=1334 audit(1765887311.242:428): prog-id=96 op=UNLOAD Dec 16 12:15:11.242000 audit: BPF prog-id=97 op=UNLOAD Dec 16 12:15:11.258253 kernel: audit: type=1334 audit(1765887311.242:429): prog-id=97 op=UNLOAD Dec 16 12:15:11.243000 audit: BPF prog-id=139 op=LOAD Dec 16 12:15:11.263062 kernel: audit: type=1334 audit(1765887311.243:430): prog-id=139 op=LOAD Dec 16 12:15:11.243000 audit: BPF prog-id=101 op=UNLOAD Dec 16 12:15:11.267504 kernel: audit: type=1334 audit(1765887311.243:431): prog-id=101 op=UNLOAD Dec 16 12:15:11.244000 audit: BPF prog-id=140 op=LOAD Dec 16 12:15:11.272805 kernel: audit: type=1334 audit(1765887311.244:432): prog-id=140 op=LOAD Dec 16 12:15:11.248000 audit: BPF prog-id=141 op=LOAD Dec 16 12:15:11.277533 kernel: audit: type=1334 audit(1765887311.248:433): prog-id=141 op=LOAD Dec 16 12:15:11.248000 audit: BPF prog-id=102 op=UNLOAD Dec 16 12:15:11.282128 kernel: audit: type=1334 audit(1765887311.248:434): prog-id=102 op=UNLOAD Dec 16 12:15:11.248000 audit: BPF prog-id=103 op=UNLOAD Dec 16 12:15:11.248000 audit: BPF prog-id=142 op=LOAD Dec 16 12:15:11.248000 audit: BPF prog-id=95 op=UNLOAD Dec 16 12:15:11.252000 audit: BPF prog-id=143 op=LOAD Dec 16 12:15:11.253000 audit: BPF prog-id=87 op=UNLOAD Dec 16 12:15:11.253000 audit: BPF prog-id=144 op=LOAD Dec 16 12:15:11.257000 audit: BPF prog-id=145 op=LOAD Dec 16 12:15:11.257000 audit: BPF prog-id=88 op=UNLOAD Dec 16 12:15:11.257000 audit: BPF prog-id=89 op=UNLOAD Dec 16 12:15:11.262000 audit: BPF prog-id=146 op=LOAD Dec 16 12:15:11.262000 audit: BPF prog-id=93 op=UNLOAD Dec 16 12:15:11.266000 audit: BPF prog-id=147 op=LOAD Dec 16 12:15:11.266000 audit: BPF prog-id=104 op=UNLOAD Dec 16 12:15:11.271000 audit: BPF prog-id=148 op=LOAD Dec 16 12:15:11.271000 audit: BPF prog-id=149 op=LOAD Dec 16 12:15:11.271000 audit: BPF prog-id=105 op=UNLOAD Dec 16 12:15:11.271000 audit: BPF prog-id=106 op=UNLOAD Dec 16 12:15:11.276000 audit: BPF prog-id=150 op=LOAD Dec 16 12:15:11.276000 audit: BPF prog-id=94 op=UNLOAD Dec 16 12:15:11.281000 audit: BPF prog-id=151 op=LOAD Dec 16 12:15:11.281000 audit: BPF prog-id=90 op=UNLOAD Dec 16 12:15:11.281000 audit: BPF prog-id=152 op=LOAD Dec 16 12:15:11.281000 audit: BPF prog-id=153 op=LOAD Dec 16 12:15:11.281000 audit: BPF prog-id=91 op=UNLOAD Dec 16 12:15:11.281000 audit: BPF prog-id=92 op=UNLOAD Dec 16 12:15:11.282000 audit: BPF prog-id=154 op=LOAD Dec 16 12:15:11.282000 audit: BPF prog-id=98 op=UNLOAD Dec 16 12:15:11.282000 audit: BPF prog-id=155 op=LOAD Dec 16 12:15:11.282000 audit: BPF prog-id=156 op=LOAD Dec 16 12:15:11.282000 audit: BPF prog-id=99 op=UNLOAD Dec 16 12:15:11.282000 audit: BPF prog-id=100 op=UNLOAD Dec 16 12:15:11.382741 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:15:11.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:15:11.394038 (kubelet)[3629]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:15:11.425324 kubelet[3629]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:15:11.425661 kubelet[3629]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:15:11.425827 kubelet[3629]: I1216 12:15:11.425796 3629 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:15:11.430421 kubelet[3629]: I1216 12:15:11.430390 3629 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 12:15:11.430421 kubelet[3629]: I1216 12:15:11.430414 3629 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:15:11.430515 kubelet[3629]: I1216 12:15:11.430436 3629 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 12:15:11.430515 kubelet[3629]: I1216 12:15:11.430440 3629 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 12:15:11.432571 kubelet[3629]: I1216 12:15:11.432376 3629 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:15:11.433554 kubelet[3629]: I1216 12:15:11.433535 3629 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 12:15:11.435103 kubelet[3629]: I1216 12:15:11.435077 3629 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:15:11.438251 kubelet[3629]: I1216 12:15:11.438233 3629 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:15:11.440555 kubelet[3629]: I1216 12:15:11.440533 3629 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 16 12:15:11.440731 kubelet[3629]: I1216 12:15:11.440684 3629 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:15:11.440852 kubelet[3629]: I1216 12:15:11.440707 3629 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547.0.0-a-623de6ebc0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:15:11.440852 kubelet[3629]: I1216 12:15:11.440852 3629 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:15:11.441234 kubelet[3629]: I1216 12:15:11.440860 3629 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 12:15:11.441234 kubelet[3629]: I1216 12:15:11.440879 3629 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 12:15:11.441451 kubelet[3629]: I1216 12:15:11.441425 3629 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:15:11.441584 kubelet[3629]: I1216 12:15:11.441571 3629 kubelet.go:475] "Attempting to sync node with API server" Dec 16 12:15:11.441615 kubelet[3629]: I1216 12:15:11.441588 3629 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:15:11.441615 kubelet[3629]: I1216 12:15:11.441606 3629 kubelet.go:387] "Adding apiserver pod source" Dec 16 12:15:11.441615 kubelet[3629]: I1216 12:15:11.441614 3629 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:15:11.444673 kubelet[3629]: I1216 12:15:11.444363 3629 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 12:15:11.445204 kubelet[3629]: I1216 12:15:11.445182 3629 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 12:15:11.445248 kubelet[3629]: I1216 12:15:11.445209 3629 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 
12:15:11.456877 kubelet[3629]: I1216 12:15:11.456790 3629 server.go:1262] "Started kubelet" Dec 16 12:15:11.459559 kubelet[3629]: I1216 12:15:11.459477 3629 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:15:11.461773 kubelet[3629]: E1216 12:15:11.461683 3629 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:15:11.463530 kubelet[3629]: I1216 12:15:11.462148 3629 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:15:11.463530 kubelet[3629]: I1216 12:15:11.462706 3629 server.go:310] "Adding debug handlers to kubelet server" Dec 16 12:15:11.466768 kubelet[3629]: I1216 12:15:11.466462 3629 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 12:15:11.467032 kubelet[3629]: I1216 12:15:11.466459 3629 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:15:11.467148 kubelet[3629]: I1216 12:15:11.467121 3629 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 12:15:11.467378 kubelet[3629]: I1216 12:15:11.467341 3629 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:15:11.467927 kubelet[3629]: I1216 12:15:11.467885 3629 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:15:11.468662 kubelet[3629]: I1216 12:15:11.468634 3629 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 12:15:11.468771 kubelet[3629]: I1216 12:15:11.468749 3629 reconciler.go:29] "Reconciler: start to sync state" Dec 16 12:15:11.470742 kubelet[3629]: I1216 12:15:11.470678 3629 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:15:11.470983 kubelet[3629]: I1216 12:15:11.470961 3629 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:15:11.471896 kubelet[3629]: I1216 12:15:11.471863 3629 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 16 12:15:11.474678 kubelet[3629]: I1216 12:15:11.474663 3629 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:15:11.475858 kubelet[3629]: I1216 12:15:11.475833 3629 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 16 12:15:11.475858 kubelet[3629]: I1216 12:15:11.475854 3629 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 12:15:11.475941 kubelet[3629]: I1216 12:15:11.475872 3629 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 12:15:11.475941 kubelet[3629]: E1216 12:15:11.475907 3629 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:15:11.536813 kubelet[3629]: I1216 12:15:11.536787 3629 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:15:11.536987 kubelet[3629]: I1216 12:15:11.536966 3629 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:15:11.537042 kubelet[3629]: I1216 12:15:11.537034 3629 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:15:11.537258 kubelet[3629]: I1216 12:15:11.537246 3629 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 12:15:11.537341 kubelet[3629]: I1216 12:15:11.537322 3629 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 12:15:11.537379 kubelet[3629]: I1216 12:15:11.537373 3629 policy_none.go:49] "None policy: Start" Dec 16 12:15:11.537451 kubelet[3629]: I1216 12:15:11.537442 3629 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 12:15:11.537515 kubelet[3629]: I1216 12:15:11.537493 3629 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 12:15:11.537702 kubelet[3629]: I1216 12:15:11.537690 3629 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Dec 16 12:15:11.537848 kubelet[3629]: I1216 12:15:11.537813 3629 policy_none.go:47] "Start" Dec 16 12:15:11.541944 kubelet[3629]: E1216 12:15:11.541912 3629 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:15:11.542091 kubelet[3629]: I1216 12:15:11.542074 3629 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:15:11.542144 kubelet[3629]: I1216 12:15:11.542089 3629 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:15:11.542987 kubelet[3629]: I1216 12:15:11.542964 3629 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:15:11.546604 kubelet[3629]: E1216 12:15:11.546044 3629 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 12:15:11.577494 kubelet[3629]: I1216 12:15:11.577461 3629 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:11.577741 kubelet[3629]: I1216 12:15:11.577461 3629 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:11.577884 kubelet[3629]: I1216 12:15:11.577619 3629 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:11.591427 kubelet[3629]: I1216 12:15:11.591380 3629 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:15:11.591573 kubelet[3629]: E1216 12:15:11.591438 3629 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.0.0-a-623de6ebc0\" already exists" pod="kube-system/kube-apiserver-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:11.591573 kubelet[3629]: I1216 12:15:11.591550 3629 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:15:11.591573 kubelet[3629]: E1216 12:15:11.591570 3629 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.0.0-a-623de6ebc0\" already exists" pod="kube-system/kube-scheduler-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:11.592162 kubelet[3629]: I1216 12:15:11.592064 3629 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:15:11.592162 kubelet[3629]: E1216 12:15:11.592106 3629 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547.0.0-a-623de6ebc0\" already exists" pod="kube-system/kube-controller-manager-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:11.650941 kubelet[3629]: I1216 12:15:11.650575 3629 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:11.797400 kubelet[3629]: I1216 12:15:11.667605 3629 kubelet_node_status.go:124] "Node was previously registered" node="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:11.797400 kubelet[3629]: I1216 12:15:11.670054 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a1bdff413b780b9a7675f4afc6d64004-k8s-certs\") pod \"kube-apiserver-ci-4547.0.0-a-623de6ebc0\" (UID: \"a1bdff413b780b9a7675f4afc6d64004\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:11.797400 kubelet[3629]: I1216 12:15:11.670083 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a1bdff413b780b9a7675f4afc6d64004-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547.0.0-a-623de6ebc0\" (UID: \"a1bdff413b780b9a7675f4afc6d64004\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:11.797400 kubelet[3629]: I1216 12:15:11.670109 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/04568fe24d62bb1587e2baf926db6b90-ca-certs\") pod \"kube-controller-manager-ci-4547.0.0-a-623de6ebc0\" (UID: 
\"04568fe24d62bb1587e2baf926db6b90\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:11.797400 kubelet[3629]: I1216 12:15:11.670120 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a1bdff413b780b9a7675f4afc6d64004-ca-certs\") pod \"kube-apiserver-ci-4547.0.0-a-623de6ebc0\" (UID: \"a1bdff413b780b9a7675f4afc6d64004\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:11.797544 kubelet[3629]: I1216 12:15:11.670131 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/04568fe24d62bb1587e2baf926db6b90-flexvolume-dir\") pod \"kube-controller-manager-ci-4547.0.0-a-623de6ebc0\" (UID: \"04568fe24d62bb1587e2baf926db6b90\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:11.797544 kubelet[3629]: I1216 12:15:11.670139 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/04568fe24d62bb1587e2baf926db6b90-k8s-certs\") pod \"kube-controller-manager-ci-4547.0.0-a-623de6ebc0\" (UID: \"04568fe24d62bb1587e2baf926db6b90\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:11.797544 kubelet[3629]: I1216 12:15:11.670148 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04568fe24d62bb1587e2baf926db6b90-kubeconfig\") pod \"kube-controller-manager-ci-4547.0.0-a-623de6ebc0\" (UID: \"04568fe24d62bb1587e2baf926db6b90\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:11.797544 kubelet[3629]: I1216 12:15:11.670159 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/04568fe24d62bb1587e2baf926db6b90-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547.0.0-a-623de6ebc0\" (UID: \"04568fe24d62bb1587e2baf926db6b90\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:11.797544 kubelet[3629]: I1216 12:15:11.670170 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e97c8fe2fabc628ab5c0d0c5aa25a616-kubeconfig\") pod \"kube-scheduler-ci-4547.0.0-a-623de6ebc0\" (UID: \"e97c8fe2fabc628ab5c0d0c5aa25a616\") " pod="kube-system/kube-scheduler-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:11.797619 kubelet[3629]: I1216 12:15:11.796944 3629 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:12.442393 kubelet[3629]: I1216 12:15:12.442364 3629 apiserver.go:52] "Watching apiserver" Dec 16 12:15:12.469732 kubelet[3629]: I1216 12:15:12.469681 3629 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 12:15:12.516747 kubelet[3629]: I1216 12:15:12.516662 3629 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:12.533650 kubelet[3629]: I1216 12:15:12.533610 3629 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:15:12.533815 
kubelet[3629]: E1216 12:15:12.533778 3629 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.0.0-a-623de6ebc0\" already exists" pod="kube-system/kube-apiserver-ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:12.545535 kubelet[3629]: I1216 12:15:12.545475 3629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4547.0.0-a-623de6ebc0" podStartSLOduration=4.545461576 podStartE2EDuration="4.545461576s" podCreationTimestamp="2025-12-16 12:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:15:12.534663563 +0000 UTC m=+1.135656297" watchObservedRunningTime="2025-12-16 12:15:12.545461576 +0000 UTC m=+1.146454278" Dec 16 12:15:12.545829 kubelet[3629]: I1216 12:15:12.545795 3629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4547.0.0-a-623de6ebc0" podStartSLOduration=4.545785472 podStartE2EDuration="4.545785472s" podCreationTimestamp="2025-12-16 12:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:15:12.545224609 +0000 UTC m=+1.146217311" watchObservedRunningTime="2025-12-16 12:15:12.545785472 +0000 UTC m=+1.146778174" Dec 16 12:15:12.571157 kubelet[3629]: I1216 12:15:12.570988 3629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4547.0.0-a-623de6ebc0" podStartSLOduration=4.570972746 podStartE2EDuration="4.570972746s" podCreationTimestamp="2025-12-16 12:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:15:12.558520419 +0000 UTC m=+1.159513129" watchObservedRunningTime="2025-12-16 12:15:12.570972746 +0000 UTC m=+1.171965472" Dec 16 12:15:15.626385 kubelet[3629]: I1216 12:15:15.625667 3629 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 12:15:15.627337 containerd[2077]: time="2025-12-16T12:15:15.627096535Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 12:15:15.628819 kubelet[3629]: I1216 12:15:15.628790 3629 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 12:15:16.713897 systemd[1]: Created slice kubepods-besteffort-pod8123dcbb_660c_4a7f_b988_61833e4e7243.slice - libcontainer container kubepods-besteffort-pod8123dcbb_660c_4a7f_b988_61833e4e7243.slice. 
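(Annotation: the systemd slice name created above is derived mechanically from the pod QoS class and UID — the UID's dashes become underscores behind a "kubepods-besteffort-pod" prefix. The rule is inferred from these log lines rather than quoted from kubelet source; a small Go sketch reproducing it for the kube-proxy pod UID seen here:)

```go
package main

import (
	"fmt"
	"strings"
)

// sliceName builds the slice name the journal shows for a BestEffort pod,
// e.g. 8123dcbb-660c-4a7f-b988-61833e4e7243 ->
//      kubepods-besteffort-pod8123dcbb_660c_4a7f_b988_61833e4e7243.slice
func sliceName(podUID string) string {
	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	// UID of kube-proxy-gf2mb, taken from the volume reconciler lines above.
	fmt.Println(sliceName("8123dcbb-660c-4a7f-b988-61833e4e7243"))
}
```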
Dec 16 12:15:16.801526 kubelet[3629]: I1216 12:15:16.801479 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8123dcbb-660c-4a7f-b988-61833e4e7243-xtables-lock\") pod \"kube-proxy-gf2mb\" (UID: \"8123dcbb-660c-4a7f-b988-61833e4e7243\") " pod="kube-system/kube-proxy-gf2mb" Dec 16 12:15:16.801526 kubelet[3629]: I1216 12:15:16.801520 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8123dcbb-660c-4a7f-b988-61833e4e7243-lib-modules\") pod \"kube-proxy-gf2mb\" (UID: \"8123dcbb-660c-4a7f-b988-61833e4e7243\") " pod="kube-system/kube-proxy-gf2mb" Dec 16 12:15:16.801526 kubelet[3629]: I1216 12:15:16.801535 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8123dcbb-660c-4a7f-b988-61833e4e7243-kube-proxy\") pod \"kube-proxy-gf2mb\" (UID: \"8123dcbb-660c-4a7f-b988-61833e4e7243\") " pod="kube-system/kube-proxy-gf2mb" Dec 16 12:15:16.801956 kubelet[3629]: I1216 12:15:16.801544 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6t2w\" (UniqueName: \"kubernetes.io/projected/8123dcbb-660c-4a7f-b988-61833e4e7243-kube-api-access-x6t2w\") pod \"kube-proxy-gf2mb\" (UID: \"8123dcbb-660c-4a7f-b988-61833e4e7243\") " pod="kube-system/kube-proxy-gf2mb" Dec 16 12:15:16.857966 systemd[1]: Created slice kubepods-besteffort-pod7ef11c49_6b3e_425a_861c_c986c7020135.slice - libcontainer container kubepods-besteffort-pod7ef11c49_6b3e_425a_861c_c986c7020135.slice. Dec 16 12:15:16.902221 kubelet[3629]: I1216 12:15:16.901839 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sljkj\" (UniqueName: \"kubernetes.io/projected/7ef11c49-6b3e-425a-861c-c986c7020135-kube-api-access-sljkj\") pod \"tigera-operator-65cdcdfd6d-498cd\" (UID: \"7ef11c49-6b3e-425a-861c-c986c7020135\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-498cd" Dec 16 12:15:16.902221 kubelet[3629]: I1216 12:15:16.901905 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7ef11c49-6b3e-425a-861c-c986c7020135-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-498cd\" (UID: \"7ef11c49-6b3e-425a-861c-c986c7020135\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-498cd" Dec 16 12:15:17.030558 containerd[2077]: time="2025-12-16T12:15:17.030454807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gf2mb,Uid:8123dcbb-660c-4a7f-b988-61833e4e7243,Namespace:kube-system,Attempt:0,}" Dec 16 12:15:17.071916 containerd[2077]: time="2025-12-16T12:15:17.071829676Z" level=info msg="connecting to shim ddc5b268d12e8733e8ccca42e0dd5fccb06605e3a5bfbed9e165d133bbd6ce17" address="unix:///run/containerd/s/493b26d46367d749735d10d81e17727b11fe930b438ddb8112b880c049807744" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:15:17.092941 systemd[1]: Started cri-containerd-ddc5b268d12e8733e8ccca42e0dd5fccb06605e3a5bfbed9e165d133bbd6ce17.scope - libcontainer container ddc5b268d12e8733e8ccca42e0dd5fccb06605e3a5bfbed9e165d133bbd6ce17. 
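(Annotation: in the SYSCALL audit records below and earlier, arch=c00000b7 is AUDIT_ARCH_AARCH64, and the syscall numbers follow the arm64 asm-generic table: 280 is bpf(2), 57 is close(2), and 211 is sendmsg(2) — the netlink writes behind the NETFILTER_CFG events further down. A small illustrative lookup in Go, limited to the numbers that actually appear in this log:)

```go
package main

import "fmt"

// Syscall numbers from the asm-generic (arm64) table, restricted to the
// ones seen in the audit records of this journal.
var aarch64Syscalls = map[int]string{
	57:  "close",
	211: "sendmsg",
	280: "bpf",
}

func main() {
	for _, nr := range []int{280, 57, 211} {
		fmt.Printf("arch=c00000b7 (AUDIT_ARCH_AARCH64) syscall=%d -> %s\n", nr, aarch64Syscalls[nr])
	}
}
```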
Dec 16 12:15:17.107321 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 16 12:15:17.107433 kernel: audit: type=1334 audit(1765887317.099:467): prog-id=157 op=LOAD Dec 16 12:15:17.099000 audit: BPF prog-id=157 op=LOAD Dec 16 12:15:17.107000 audit: BPF prog-id=158 op=LOAD Dec 16 12:15:17.113832 kernel: audit: type=1334 audit(1765887317.107:468): prog-id=158 op=LOAD Dec 16 12:15:17.107000 audit[3699]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3685 pid=3699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.131095 kernel: audit: type=1300 audit(1765887317.107:468): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3685 pid=3699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464633562323638643132653837333365386363636134326530646435 Dec 16 12:15:17.148109 kernel: audit: type=1327 audit(1765887317.107:468): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464633562323638643132653837333365386363636134326530646435 Dec 16 12:15:17.107000 audit: BPF prog-id=158 op=UNLOAD Dec 16 12:15:17.153029 kernel: audit: type=1334 audit(1765887317.107:469): prog-id=158 op=UNLOAD Dec 16 12:15:17.107000 audit[3699]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3685 pid=3699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.169402 kernel: audit: type=1300 audit(1765887317.107:469): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3685 pid=3699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464633562323638643132653837333365386363636134326530646435 Dec 16 12:15:17.186875 kernel: audit: type=1327 audit(1765887317.107:469): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464633562323638643132653837333365386363636134326530646435 Dec 16 12:15:17.107000 audit: BPF prog-id=159 op=LOAD Dec 16 12:15:17.192488 kernel: audit: type=1334 audit(1765887317.107:470): prog-id=159 op=LOAD Dec 16 12:15:17.193858 containerd[2077]: time="2025-12-16T12:15:17.193633532Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-498cd,Uid:7ef11c49-6b3e-425a-861c-c986c7020135,Namespace:tigera-operator,Attempt:0,}" Dec 16 12:15:17.107000 audit[3699]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3685 pid=3699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.211590 kernel: audit: type=1300 audit(1765887317.107:470): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3685 pid=3699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464633562323638643132653837333365386363636134326530646435 Dec 16 12:15:17.229272 kernel: audit: type=1327 audit(1765887317.107:470): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464633562323638643132653837333365386363636134326530646435 Dec 16 12:15:17.107000 audit: BPF prog-id=160 op=LOAD Dec 16 12:15:17.107000 audit[3699]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3685 pid=3699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464633562323638643132653837333365386363636134326530646435 Dec 16 12:15:17.107000 audit: BPF prog-id=160 op=UNLOAD Dec 16 12:15:17.107000 audit[3699]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3685 pid=3699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464633562323638643132653837333365386363636134326530646435 Dec 16 12:15:17.107000 audit: BPF prog-id=159 op=UNLOAD Dec 16 12:15:17.107000 audit[3699]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3685 pid=3699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464633562323638643132653837333365386363636134326530646435 Dec 16 12:15:17.107000 audit: BPF prog-id=161 op=LOAD 
Dec 16 12:15:17.107000 audit[3699]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3685 pid=3699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464633562323638643132653837333365386363636134326530646435 Dec 16 12:15:17.235338 containerd[2077]: time="2025-12-16T12:15:17.235303993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gf2mb,Uid:8123dcbb-660c-4a7f-b988-61833e4e7243,Namespace:kube-system,Attempt:0,} returns sandbox id \"ddc5b268d12e8733e8ccca42e0dd5fccb06605e3a5bfbed9e165d133bbd6ce17\"" Dec 16 12:15:17.245628 containerd[2077]: time="2025-12-16T12:15:17.245327444Z" level=info msg="CreateContainer within sandbox \"ddc5b268d12e8733e8ccca42e0dd5fccb06605e3a5bfbed9e165d133bbd6ce17\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 12:15:17.274510 containerd[2077]: time="2025-12-16T12:15:17.274473025Z" level=info msg="Container 5adf43a9c157a606563ef354c5d1552ed279f751f8e4cea2025f7dda16d51104: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:15:17.288597 containerd[2077]: time="2025-12-16T12:15:17.288152017Z" level=info msg="connecting to shim f817189585b836d8f012aa500aa07e638803066d4d314288d243c1b7aa32ad91" address="unix:///run/containerd/s/44a73bce38c42b0ca57d5891f2b0b7137f910176e12dd0766aa202c13459f607" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:15:17.297147 containerd[2077]: time="2025-12-16T12:15:17.297110052Z" level=info msg="CreateContainer within sandbox \"ddc5b268d12e8733e8ccca42e0dd5fccb06605e3a5bfbed9e165d133bbd6ce17\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5adf43a9c157a606563ef354c5d1552ed279f751f8e4cea2025f7dda16d51104\"" Dec 16 12:15:17.299002 containerd[2077]: time="2025-12-16T12:15:17.297923751Z" level=info msg="StartContainer for \"5adf43a9c157a606563ef354c5d1552ed279f751f8e4cea2025f7dda16d51104\"" Dec 16 12:15:17.301202 containerd[2077]: time="2025-12-16T12:15:17.301014662Z" level=info msg="connecting to shim 5adf43a9c157a606563ef354c5d1552ed279f751f8e4cea2025f7dda16d51104" address="unix:///run/containerd/s/493b26d46367d749735d10d81e17727b11fe930b438ddb8112b880c049807744" protocol=ttrpc version=3 Dec 16 12:15:17.309941 systemd[1]: Started cri-containerd-f817189585b836d8f012aa500aa07e638803066d4d314288d243c1b7aa32ad91.scope - libcontainer container f817189585b836d8f012aa500aa07e638803066d4d314288d243c1b7aa32ad91. Dec 16 12:15:17.324909 systemd[1]: Started cri-containerd-5adf43a9c157a606563ef354c5d1552ed279f751f8e4cea2025f7dda16d51104.scope - libcontainer container 5adf43a9c157a606563ef354c5d1552ed279f751f8e4cea2025f7dda16d51104. 
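(Annotation: the containerd timestamps above are RFC 3339 with nanosecond precision, so per-step latencies can be read straight off the log. For example, the RunPodSandbox request for kube-proxy-gf2mb is logged at 12:15:17.030454807Z and the matching "returns sandbox id" line at 12:15:17.235303993Z, roughly 205 ms of sandbox setup. A short Go sketch computing that difference from the two timestamps copied out of the log:)

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied verbatim from the containerd lines above;
	// parse errors are ignored here because the inputs are fixed literals.
	start, _ := time.Parse(time.RFC3339Nano, "2025-12-16T12:15:17.030454807Z")
	done, _ := time.Parse(time.RFC3339Nano, "2025-12-16T12:15:17.235303993Z")
	fmt.Println(done.Sub(start)) // ~204.85ms of sandbox setup latency
}
```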
Dec 16 12:15:17.325000 audit: BPF prog-id=162 op=LOAD Dec 16 12:15:17.327000 audit: BPF prog-id=163 op=LOAD Dec 16 12:15:17.327000 audit[3744]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=3733 pid=3744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638313731383935383562383336643866303132616135303061613037 Dec 16 12:15:17.327000 audit: BPF prog-id=163 op=UNLOAD Dec 16 12:15:17.327000 audit[3744]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3733 pid=3744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638313731383935383562383336643866303132616135303061613037 Dec 16 12:15:17.327000 audit: BPF prog-id=164 op=LOAD Dec 16 12:15:17.327000 audit[3744]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=3733 pid=3744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638313731383935383562383336643866303132616135303061613037 Dec 16 12:15:17.327000 audit: BPF prog-id=165 op=LOAD Dec 16 12:15:17.327000 audit[3744]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=3733 pid=3744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638313731383935383562383336643866303132616135303061613037 Dec 16 12:15:17.327000 audit: BPF prog-id=165 op=UNLOAD Dec 16 12:15:17.327000 audit[3744]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3733 pid=3744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638313731383935383562383336643866303132616135303061613037 Dec 16 12:15:17.327000 audit: BPF prog-id=164 op=UNLOAD Dec 16 12:15:17.327000 audit[3744]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3733 pid=3744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638313731383935383562383336643866303132616135303061613037 Dec 16 12:15:17.327000 audit: BPF prog-id=166 op=LOAD Dec 16 12:15:17.327000 audit[3744]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=3733 pid=3744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638313731383935383562383336643866303132616135303061613037 Dec 16 12:15:17.357126 containerd[2077]: time="2025-12-16T12:15:17.357076631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-498cd,Uid:7ef11c49-6b3e-425a-861c-c986c7020135,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f817189585b836d8f012aa500aa07e638803066d4d314288d243c1b7aa32ad91\"" Dec 16 12:15:17.360156 containerd[2077]: time="2025-12-16T12:15:17.359964405Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 12:15:17.362000 audit: BPF prog-id=167 op=LOAD Dec 16 12:15:17.362000 audit[3756]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3685 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.362000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561646634336139633135376136303635363365663335346335643135 Dec 16 12:15:17.363000 audit: BPF prog-id=168 op=LOAD Dec 16 12:15:17.363000 audit[3756]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3685 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.363000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561646634336139633135376136303635363365663335346335643135 Dec 16 12:15:17.363000 audit: BPF prog-id=168 op=UNLOAD Dec 16 12:15:17.363000 audit[3756]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3685 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.363000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561646634336139633135376136303635363365663335346335643135 Dec 16 12:15:17.363000 audit: BPF prog-id=167 op=UNLOAD Dec 16 12:15:17.363000 audit[3756]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3685 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.363000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561646634336139633135376136303635363365663335346335643135 Dec 16 12:15:17.363000 audit: BPF prog-id=169 op=LOAD Dec 16 12:15:17.363000 audit[3756]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=3685 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.363000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561646634336139633135376136303635363365663335346335643135 Dec 16 12:15:17.384059 containerd[2077]: time="2025-12-16T12:15:17.383942191Z" level=info msg="StartContainer for \"5adf43a9c157a606563ef354c5d1552ed279f751f8e4cea2025f7dda16d51104\" returns successfully" Dec 16 12:15:17.578000 audit[3834]: NETFILTER_CFG table=mangle:57 family=2 entries=1 op=nft_register_chain pid=3834 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:17.578000 audit[3834]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdb732b30 a2=0 a3=1 items=0 ppid=3775 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.578000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 12:15:17.579000 audit[3835]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3835 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:17.579000 audit[3835]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcbc8a930 a2=0 a3=1 items=0 ppid=3775 pid=3835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.579000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 12:15:17.580000 audit[3837]: NETFILTER_CFG table=mangle:59 family=10 entries=1 op=nft_register_chain pid=3837 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:17.580000 audit[3837]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff2c861e0 a2=0 a3=1 items=0 ppid=3775 pid=3837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.580000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 12:15:17.581000 audit[3838]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3838 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:17.581000 audit[3838]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd6f0a740 a2=0 a3=1 items=0 ppid=3775 pid=3838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.581000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 12:15:17.583000 audit[3839]: NETFILTER_CFG table=nat:61 family=10 entries=1 op=nft_register_chain pid=3839 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:17.583000 audit[3839]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff4420be0 a2=0 a3=1 items=0 ppid=3775 pid=3839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.583000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 12:15:17.585000 audit[3842]: NETFILTER_CFG table=filter:62 family=10 entries=1 op=nft_register_chain pid=3842 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:17.585000 audit[3842]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe921ca30 a2=0 a3=1 items=0 ppid=3775 pid=3842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.585000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 12:15:17.687000 audit[3843]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3843 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:17.687000 audit[3843]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffeaebc0f0 a2=0 a3=1 items=0 ppid=3775 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.687000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 12:15:17.689000 audit[3845]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3845 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:17.689000 audit[3845]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffffa7e5cd0 a2=0 a3=1 items=0 ppid=3775 pid=3845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.689000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Dec 16 12:15:17.693000 audit[3848]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_rule pid=3848 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:17.693000 audit[3848]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc320c500 a2=0 a3=1 items=0 ppid=3775 pid=3848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.693000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Dec 16 12:15:17.694000 audit[3849]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_chain pid=3849 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:17.694000 audit[3849]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd3b83460 a2=0 a3=1 items=0 ppid=3775 pid=3849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.694000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 12:15:17.696000 audit[3851]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3851 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:17.696000 audit[3851]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe3409d40 a2=0 a3=1 items=0 ppid=3775 pid=3851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.696000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 12:15:17.697000 audit[3852]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3852 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:17.697000 audit[3852]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe61e7a60 a2=0 a3=1 items=0 ppid=3775 pid=3852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.697000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 12:15:17.699000 audit[3854]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3854 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:17.699000 audit[3854]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffdeba4960 a2=0 a3=1 items=0 ppid=3775 pid=3854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.699000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:15:17.702000 audit[3857]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=3857 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:17.702000 audit[3857]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd4925300 a2=0 a3=1 items=0 ppid=3775 pid=3857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.702000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:15:17.704000 audit[3858]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=3858 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:17.704000 audit[3858]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe26f35b0 a2=0 a3=1 items=0 ppid=3775 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.704000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 12:15:17.707000 audit[3860]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3860 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:17.707000 audit[3860]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcf399220 a2=0 a3=1 items=0 ppid=3775 pid=3860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.707000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 12:15:17.708000 audit[3861]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=3861 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:17.708000 audit[3861]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd08d88e0 a2=0 a3=1 items=0 ppid=3775 pid=3861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.708000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 12:15:17.710000 audit[3863]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=3863 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:17.710000 audit[3863]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffedf7d110 
a2=0 a3=1 items=0 ppid=3775 pid=3863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.710000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Dec 16 12:15:17.714000 audit[3866]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=3866 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:17.714000 audit[3866]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdbbd0380 a2=0 a3=1 items=0 ppid=3775 pid=3866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.714000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Dec 16 12:15:17.717000 audit[3869]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=3869 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:17.717000 audit[3869]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe105baf0 a2=0 a3=1 items=0 ppid=3775 pid=3869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.717000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Dec 16 12:15:17.718000 audit[3870]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3870 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:17.718000 audit[3870]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc980de70 a2=0 a3=1 items=0 ppid=3775 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.718000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 12:15:17.720000 audit[3872]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3872 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:17.720000 audit[3872]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff1067e90 a2=0 a3=1 items=0 ppid=3775 pid=3872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.720000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 
12:15:17.723000 audit[3875]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_rule pid=3875 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:17.723000 audit[3875]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc9620630 a2=0 a3=1 items=0 ppid=3775 pid=3875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.723000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:15:17.725000 audit[3876]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_chain pid=3876 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:17.725000 audit[3876]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe87d7060 a2=0 a3=1 items=0 ppid=3775 pid=3876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.725000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 12:15:17.727000 audit[3878]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=3878 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:17.727000 audit[3878]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffeaa2d230 a2=0 a3=1 items=0 ppid=3775 pid=3878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.727000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 12:15:17.805000 audit[3884]: NETFILTER_CFG table=filter:82 family=2 entries=8 op=nft_register_rule pid=3884 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:17.805000 audit[3884]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff37fcd90 a2=0 a3=1 items=0 ppid=3775 pid=3884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.805000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:17.827000 audit[3884]: NETFILTER_CFG table=nat:83 family=2 entries=14 op=nft_register_chain pid=3884 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:17.827000 audit[3884]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=fffff37fcd90 a2=0 a3=1 items=0 ppid=3775 pid=3884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.827000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:17.828000 audit[3889]: NETFILTER_CFG table=filter:84 family=10 
entries=1 op=nft_register_chain pid=3889 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:17.828000 audit[3889]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffeb396eb0 a2=0 a3=1 items=0 ppid=3775 pid=3889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.828000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 12:15:17.832000 audit[3891]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=3891 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:17.832000 audit[3891]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc0cb0fa0 a2=0 a3=1 items=0 ppid=3775 pid=3891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.832000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Dec 16 12:15:17.835000 audit[3894]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=3894 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:17.835000 audit[3894]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd5363ff0 a2=0 a3=1 items=0 ppid=3775 pid=3894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.835000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Dec 16 12:15:17.836000 audit[3895]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=3895 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:17.836000 audit[3895]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffff0efbc0 a2=0 a3=1 items=0 ppid=3775 pid=3895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.836000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 12:15:17.838000 audit[3897]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=3897 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:17.838000 audit[3897]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffcb252c0 a2=0 a3=1 items=0 ppid=3775 pid=3897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.838000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 12:15:17.839000 audit[3898]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3898 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:17.839000 audit[3898]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff6cac380 a2=0 a3=1 items=0 ppid=3775 pid=3898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.839000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 12:15:17.841000 audit[3900]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3900 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:17.841000 audit[3900]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffeb7c8a00 a2=0 a3=1 items=0 ppid=3775 pid=3900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.841000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:15:17.844000 audit[3903]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=3903 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:17.844000 audit[3903]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffd61d2590 a2=0 a3=1 items=0 ppid=3775 pid=3903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.844000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:15:17.845000 audit[3904]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=3904 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:17.845000 audit[3904]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff87b3aa0 a2=0 a3=1 items=0 ppid=3775 pid=3904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.845000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 12:15:17.847000 audit[3906]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3906 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:17.847000 audit[3906]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffda111c90 a2=0 a3=1 items=0 ppid=3775 pid=3906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.847000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 12:15:17.848000 audit[3907]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=3907 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:17.848000 audit[3907]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffacca450 a2=0 a3=1 items=0 ppid=3775 pid=3907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.848000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 12:15:17.850000 audit[3909]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=3909 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:17.850000 audit[3909]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdebb8850 a2=0 a3=1 items=0 ppid=3775 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.850000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Dec 16 12:15:17.853000 audit[3912]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=3912 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:17.853000 audit[3912]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd23c40c0 a2=0 a3=1 items=0 ppid=3775 pid=3912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.853000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Dec 16 12:15:17.856000 audit[3915]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=3915 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:17.856000 audit[3915]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe5c4d500 a2=0 a3=1 items=0 ppid=3775 pid=3915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.856000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Dec 16 12:15:17.857000 audit[3916]: NETFILTER_CFG table=nat:98 family=10 
entries=1 op=nft_register_chain pid=3916 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:17.857000 audit[3916]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd77dbfd0 a2=0 a3=1 items=0 ppid=3775 pid=3916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.857000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 12:15:17.859000 audit[3918]: NETFILTER_CFG table=nat:99 family=10 entries=1 op=nft_register_rule pid=3918 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:17.859000 audit[3918]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffec6c6a90 a2=0 a3=1 items=0 ppid=3775 pid=3918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.859000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:15:17.862000 audit[3921]: NETFILTER_CFG table=nat:100 family=10 entries=1 op=nft_register_rule pid=3921 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:17.862000 audit[3921]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd8fc2000 a2=0 a3=1 items=0 ppid=3775 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.862000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:15:17.864000 audit[3922]: NETFILTER_CFG table=nat:101 family=10 entries=1 op=nft_register_chain pid=3922 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:17.864000 audit[3922]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe97d98e0 a2=0 a3=1 items=0 ppid=3775 pid=3922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.864000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 12:15:17.866000 audit[3924]: NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=3924 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:17.866000 audit[3924]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffcdf28520 a2=0 a3=1 items=0 ppid=3775 pid=3924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.866000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 12:15:17.867000 audit[3925]: NETFILTER_CFG 
table=filter:103 family=10 entries=1 op=nft_register_chain pid=3925 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:17.867000 audit[3925]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdca0e090 a2=0 a3=1 items=0 ppid=3775 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.867000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 12:15:17.869000 audit[3927]: NETFILTER_CFG table=filter:104 family=10 entries=1 op=nft_register_rule pid=3927 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:17.869000 audit[3927]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd1c644f0 a2=0 a3=1 items=0 ppid=3775 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.869000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:15:17.872000 audit[3930]: NETFILTER_CFG table=filter:105 family=10 entries=1 op=nft_register_rule pid=3930 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:17.872000 audit[3930]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff9a918c0 a2=0 a3=1 items=0 ppid=3775 pid=3930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.872000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:15:17.874000 audit[3932]: NETFILTER_CFG table=filter:106 family=10 entries=3 op=nft_register_rule pid=3932 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 12:15:17.874000 audit[3932]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffd410eac0 a2=0 a3=1 items=0 ppid=3775 pid=3932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.874000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:17.875000 audit[3932]: NETFILTER_CFG table=nat:107 family=10 entries=7 op=nft_register_chain pid=3932 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 12:15:17.875000 audit[3932]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffd410eac0 a2=0 a3=1 items=0 ppid=3775 pid=3932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:17.875000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:19.366560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount514241253.mount: Deactivated successfully. 
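The NETFILTER_CFG records above capture kube-proxy registering its iptables/ip6tables chains, and each accompanying PROCTITLE field carries the full command line as hex with NUL-separated arguments. As a reading aid only (the helper below is illustrative and not part of the captured log), a minimal Python sketch can turn such a value back into the original command; the sample is copied verbatim from the 12:15:17.578000 record above and decodes to `iptables -w 5 -N KUBE-PROXY-CANARY -t mangle`:

```python
# Reading aid (illustrative, not part of the captured log): audit PROCTITLE
# values are hex-encoded process titles with NUL bytes separating arguments.
# The sample below is copied verbatim from the 12:15:17.578000 record above.

def decode_proctitle(hex_value: str) -> str:
    # bytes.fromhex() accepts the uppercase hex emitted by auditd; NULs split argv.
    raw = bytes.fromhex(hex_value)
    return " ".join(p.decode("utf-8", errors="replace") for p in raw.split(b"\x00") if p)

if __name__ == "__main__":
    sample = ("69707461626C6573002D770035002D4E00"
              "4B5542452D50524F58592D43414E415259002D74006D616E676C65")
    print(decode_proctitle(sample))  # iptables -w 5 -N KUBE-PROXY-CANARY -t mangle
```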
Dec 16 12:15:19.812111 containerd[2077]: time="2025-12-16T12:15:19.812064087Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:19.817015 containerd[2077]: time="2025-12-16T12:15:19.816245736Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Dec 16 12:15:19.819808 containerd[2077]: time="2025-12-16T12:15:19.819773347Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:19.824104 containerd[2077]: time="2025-12-16T12:15:19.824075486Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:19.824391 containerd[2077]: time="2025-12-16T12:15:19.824365126Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.46436639s" Dec 16 12:15:19.824391 containerd[2077]: time="2025-12-16T12:15:19.824391536Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 16 12:15:19.832657 containerd[2077]: time="2025-12-16T12:15:19.832626631Z" level=info msg="CreateContainer within sandbox \"f817189585b836d8f012aa500aa07e638803066d4d314288d243c1b7aa32ad91\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 12:15:19.851479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1584898139.mount: Deactivated successfully. Dec 16 12:15:19.854416 containerd[2077]: time="2025-12-16T12:15:19.854001122Z" level=info msg="Container 4e2a7d18ff6a1b02e28cbd67484e26a6bfb524d5fec5fa0aab832e9204d59689: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:15:19.872254 containerd[2077]: time="2025-12-16T12:15:19.872207728Z" level=info msg="CreateContainer within sandbox \"f817189585b836d8f012aa500aa07e638803066d4d314288d243c1b7aa32ad91\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4e2a7d18ff6a1b02e28cbd67484e26a6bfb524d5fec5fa0aab832e9204d59689\"" Dec 16 12:15:19.873025 containerd[2077]: time="2025-12-16T12:15:19.872832068Z" level=info msg="StartContainer for \"4e2a7d18ff6a1b02e28cbd67484e26a6bfb524d5fec5fa0aab832e9204d59689\"" Dec 16 12:15:19.875300 containerd[2077]: time="2025-12-16T12:15:19.875262988Z" level=info msg="connecting to shim 4e2a7d18ff6a1b02e28cbd67484e26a6bfb524d5fec5fa0aab832e9204d59689" address="unix:///run/containerd/s/44a73bce38c42b0ca57d5891f2b0b7137f910176e12dd0766aa202c13459f607" protocol=ttrpc version=3 Dec 16 12:15:19.890922 systemd[1]: Started cri-containerd-4e2a7d18ff6a1b02e28cbd67484e26a6bfb524d5fec5fa0aab832e9204d59689.scope - libcontainer container 4e2a7d18ff6a1b02e28cbd67484e26a6bfb524d5fec5fa0aab832e9204d59689. 
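The containerd entries above are structured time/level/msg records; the 12:15:19.824 message reports the tigera-operator image pull completing in 2.46436639s. A small Python sketch, based only on the format shown above (the regex and the abbreviated sample line are assumptions, not a containerd-defined interface), for pulling that duration out of such a record:

```python
# Illustrative sketch (assumed log format, not a containerd API): extract the
# pull duration from a "Pulled image ... in <seconds>s" message such as the
# 12:15:19.824 record above. The sample line is abbreviated from that record.
import re

LINE = ('time="2025-12-16T12:15:19.824365126Z" level=info '
        'msg="Pulled image quay.io/tigera/operator:v1.38.7 ... in 2.46436639s"')

m = re.search(r'\bin ([0-9.]+)s"$', LINE)
if m:
    print(f"pull duration: {float(m.group(1)):.2f}s")  # pull duration: 2.46s
```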
Dec 16 12:15:19.898000 audit: BPF prog-id=170 op=LOAD Dec 16 12:15:19.899000 audit: BPF prog-id=171 op=LOAD Dec 16 12:15:19.899000 audit[3941]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=3733 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:19.899000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465326137643138666636613162303265323863626436373438346532 Dec 16 12:15:19.899000 audit: BPF prog-id=171 op=UNLOAD Dec 16 12:15:19.899000 audit[3941]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3733 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:19.899000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465326137643138666636613162303265323863626436373438346532 Dec 16 12:15:19.899000 audit: BPF prog-id=172 op=LOAD Dec 16 12:15:19.899000 audit[3941]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=3733 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:19.899000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465326137643138666636613162303265323863626436373438346532 Dec 16 12:15:19.899000 audit: BPF prog-id=173 op=LOAD Dec 16 12:15:19.899000 audit[3941]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=3733 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:19.899000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465326137643138666636613162303265323863626436373438346532 Dec 16 12:15:19.899000 audit: BPF prog-id=173 op=UNLOAD Dec 16 12:15:19.899000 audit[3941]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3733 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:19.899000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465326137643138666636613162303265323863626436373438346532 Dec 16 12:15:19.899000 audit: BPF prog-id=172 op=UNLOAD Dec 16 12:15:19.899000 audit[3941]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3733 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:19.899000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465326137643138666636613162303265323863626436373438346532 Dec 16 12:15:19.899000 audit: BPF prog-id=174 op=LOAD Dec 16 12:15:19.899000 audit[3941]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=3733 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:19.899000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465326137643138666636613162303265323863626436373438346532 Dec 16 12:15:19.929998 containerd[2077]: time="2025-12-16T12:15:19.929957380Z" level=info msg="StartContainer for \"4e2a7d18ff6a1b02e28cbd67484e26a6bfb524d5fec5fa0aab832e9204d59689\" returns successfully" Dec 16 12:15:20.351443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2725634691.mount: Deactivated successfully. Dec 16 12:15:20.545078 kubelet[3629]: I1216 12:15:20.545018 3629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gf2mb" podStartSLOduration=4.54500195 podStartE2EDuration="4.54500195s" podCreationTimestamp="2025-12-16 12:15:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:15:17.550836439 +0000 UTC m=+6.151829141" watchObservedRunningTime="2025-12-16 12:15:20.54500195 +0000 UTC m=+9.145994692" Dec 16 12:15:20.545535 kubelet[3629]: I1216 12:15:20.545096 3629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-498cd" podStartSLOduration=2.078202343 podStartE2EDuration="4.545092949s" podCreationTimestamp="2025-12-16 12:15:16 +0000 UTC" firstStartedPulling="2025-12-16 12:15:17.358608725 +0000 UTC m=+5.959601427" lastFinishedPulling="2025-12-16 12:15:19.825499331 +0000 UTC m=+8.426492033" observedRunningTime="2025-12-16 12:15:20.544998149 +0000 UTC m=+9.145990851" watchObservedRunningTime="2025-12-16 12:15:20.545092949 +0000 UTC m=+9.146085651" Dec 16 12:15:25.140092 sudo[2586]: pam_unix(sudo:session): session closed for user root Dec 16 12:15:25.156420 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 16 12:15:25.156572 kernel: audit: type=1106 audit(1765887325.139:547): pid=2586 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:15:25.139000 audit[2586]: USER_END pid=2586 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 12:15:25.139000 audit[2586]: CRED_DISP pid=2586 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:15:25.173221 kernel: audit: type=1104 audit(1765887325.139:548): pid=2586 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:15:25.222778 sshd[2585]: Connection closed by 10.200.16.10 port 43992 Dec 16 12:15:25.224450 sshd-session[2581]: pam_unix(sshd:session): session closed for user core Dec 16 12:15:25.227000 audit[2581]: USER_END pid=2581 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:15:25.233318 systemd[1]: sshd@6-10.200.20.36:22-10.200.16.10:43992.service: Deactivated successfully. Dec 16 12:15:25.241625 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 12:15:25.243848 systemd[1]: session-10.scope: Consumed 4.859s CPU time, 219.7M memory peak. Dec 16 12:15:25.252394 systemd-logind[2038]: Session 10 logged out. Waiting for processes to exit. Dec 16 12:15:25.227000 audit[2581]: CRED_DISP pid=2581 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:15:25.254584 systemd-logind[2038]: Removed session 10. Dec 16 12:15:25.269379 kernel: audit: type=1106 audit(1765887325.227:549): pid=2581 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:15:25.269471 kernel: audit: type=1104 audit(1765887325.227:550): pid=2581 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:15:25.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.36:22-10.200.16.10:43992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:25.285037 kernel: audit: type=1131 audit(1765887325.234:551): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.36:22-10.200.16.10:43992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:15:26.678000 audit[4022]: NETFILTER_CFG table=filter:108 family=2 entries=15 op=nft_register_rule pid=4022 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:26.678000 audit[4022]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffce6c9250 a2=0 a3=1 items=0 ppid=3775 pid=4022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:26.711273 kernel: audit: type=1325 audit(1765887326.678:552): table=filter:108 family=2 entries=15 op=nft_register_rule pid=4022 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:26.711468 kernel: audit: type=1300 audit(1765887326.678:552): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffce6c9250 a2=0 a3=1 items=0 ppid=3775 pid=4022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:26.678000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:26.722822 kernel: audit: type=1327 audit(1765887326.678:552): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:26.699000 audit[4022]: NETFILTER_CFG table=nat:109 family=2 entries=12 op=nft_register_rule pid=4022 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:26.733789 kernel: audit: type=1325 audit(1765887326.699:553): table=nat:109 family=2 entries=12 op=nft_register_rule pid=4022 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:26.699000 audit[4022]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffce6c9250 a2=0 a3=1 items=0 ppid=3775 pid=4022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:26.754808 kernel: audit: type=1300 audit(1765887326.699:553): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffce6c9250 a2=0 a3=1 items=0 ppid=3775 pid=4022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:26.699000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:27.810000 audit[4024]: NETFILTER_CFG table=filter:110 family=2 entries=16 op=nft_register_rule pid=4024 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:27.810000 audit[4024]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffc7062a0 a2=0 a3=1 items=0 ppid=3775 pid=4024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:27.810000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:27.815000 audit[4024]: NETFILTER_CFG table=nat:111 family=2 entries=12 op=nft_register_rule pid=4024 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:27.815000 audit[4024]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=2700 a0=3 a1=fffffc7062a0 a2=0 a3=1 items=0 ppid=3775 pid=4024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:27.815000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:29.565000 audit[4026]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=4026 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:29.565000 audit[4026]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffffb1ebbe0 a2=0 a3=1 items=0 ppid=3775 pid=4026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:29.565000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:29.570000 audit[4026]: NETFILTER_CFG table=nat:113 family=2 entries=12 op=nft_register_rule pid=4026 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:29.570000 audit[4026]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffb1ebbe0 a2=0 a3=1 items=0 ppid=3775 pid=4026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:29.570000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:30.579000 audit[4028]: NETFILTER_CFG table=filter:114 family=2 entries=19 op=nft_register_rule pid=4028 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:30.583984 kernel: kauditd_printk_skb: 13 callbacks suppressed Dec 16 12:15:30.584073 kernel: audit: type=1325 audit(1765887330.579:558): table=filter:114 family=2 entries=19 op=nft_register_rule pid=4028 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:30.579000 audit[4028]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc4c5fa20 a2=0 a3=1 items=0 ppid=3775 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:30.579000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:30.624539 kernel: audit: type=1300 audit(1765887330.579:558): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc4c5fa20 a2=0 a3=1 items=0 ppid=3775 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:30.624657 kernel: audit: type=1327 audit(1765887330.579:558): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:30.615000 audit[4028]: NETFILTER_CFG table=nat:115 family=2 entries=12 op=nft_register_rule pid=4028 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:30.635861 kernel: audit: type=1325 audit(1765887330.615:559): table=nat:115 family=2 entries=12 op=nft_register_rule pid=4028 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:30.615000 audit[4028]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc4c5fa20 a2=0 a3=1 items=0 ppid=3775 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:30.615000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:30.664539 kernel: audit: type=1300 audit(1765887330.615:559): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc4c5fa20 a2=0 a3=1 items=0 ppid=3775 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:30.664658 kernel: audit: type=1327 audit(1765887330.615:559): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:31.533773 systemd[1]: Created slice kubepods-besteffort-podce86a9ef_6d65_4cb6_be1e_6316fd507fbb.slice - libcontainer container kubepods-besteffort-podce86a9ef_6d65_4cb6_be1e_6316fd507fbb.slice. Dec 16 12:15:31.595406 kubelet[3629]: I1216 12:15:31.595359 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce86a9ef-6d65-4cb6-be1e-6316fd507fbb-tigera-ca-bundle\") pod \"calico-typha-545686595d-kr8z2\" (UID: \"ce86a9ef-6d65-4cb6-be1e-6316fd507fbb\") " pod="calico-system/calico-typha-545686595d-kr8z2" Dec 16 12:15:31.595406 kubelet[3629]: I1216 12:15:31.595402 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk47b\" (UniqueName: \"kubernetes.io/projected/ce86a9ef-6d65-4cb6-be1e-6316fd507fbb-kube-api-access-hk47b\") pod \"calico-typha-545686595d-kr8z2\" (UID: \"ce86a9ef-6d65-4cb6-be1e-6316fd507fbb\") " pod="calico-system/calico-typha-545686595d-kr8z2" Dec 16 12:15:31.595406 kubelet[3629]: I1216 12:15:31.595416 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ce86a9ef-6d65-4cb6-be1e-6316fd507fbb-typha-certs\") pod \"calico-typha-545686595d-kr8z2\" (UID: \"ce86a9ef-6d65-4cb6-be1e-6316fd507fbb\") " pod="calico-system/calico-typha-545686595d-kr8z2" Dec 16 12:15:31.641000 audit[4031]: NETFILTER_CFG table=filter:116 family=2 entries=21 op=nft_register_rule pid=4031 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:31.641000 audit[4031]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffeae221d0 a2=0 a3=1 items=0 ppid=3775 pid=4031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:31.671185 kernel: audit: type=1325 audit(1765887331.641:560): table=filter:116 family=2 entries=21 op=nft_register_rule pid=4031 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:31.671327 kernel: audit: type=1300 audit(1765887331.641:560): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffeae221d0 a2=0 a3=1 items=0 ppid=3775 pid=4031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:31.641000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:31.685440 kernel: audit: type=1327 audit(1765887331.641:560): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:31.677000 audit[4031]: NETFILTER_CFG table=nat:117 family=2 entries=12 op=nft_register_rule pid=4031 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:31.697935 kernel: audit: type=1325 audit(1765887331.677:561): table=nat:117 family=2 entries=12 op=nft_register_rule pid=4031 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:31.677000 audit[4031]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffeae221d0 a2=0 a3=1 items=0 ppid=3775 pid=4031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:31.677000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:31.743841 systemd[1]: Created slice kubepods-besteffort-pod49b2d360_9637_4758_9da4_648567149f25.slice - libcontainer container kubepods-besteffort-pod49b2d360_9637_4758_9da4_648567149f25.slice. Dec 16 12:15:31.847820 containerd[2077]: time="2025-12-16T12:15:31.847781432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-545686595d-kr8z2,Uid:ce86a9ef-6d65-4cb6-be1e-6316fd507fbb,Namespace:calico-system,Attempt:0,}" Dec 16 12:15:31.897945 containerd[2077]: time="2025-12-16T12:15:31.897902575Z" level=info msg="connecting to shim 23229cd8aef38f1c9fe98f23aaba77ff30226b874e9daa0cc0e76c48315f938d" address="unix:///run/containerd/s/6c79ab2069d1785ec46c8372875ef14904afc8251af16052155289de50c16954" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:15:31.900904 kubelet[3629]: I1216 12:15:31.900643 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/49b2d360-9637-4758-9da4-648567149f25-node-certs\") pod \"calico-node-jhbwd\" (UID: \"49b2d360-9637-4758-9da4-648567149f25\") " pod="calico-system/calico-node-jhbwd" Dec 16 12:15:31.900904 kubelet[3629]: I1216 12:15:31.900802 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/49b2d360-9637-4758-9da4-648567149f25-cni-net-dir\") pod \"calico-node-jhbwd\" (UID: \"49b2d360-9637-4758-9da4-648567149f25\") " pod="calico-system/calico-node-jhbwd" Dec 16 12:15:31.901357 kubelet[3629]: I1216 12:15:31.901147 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49b2d360-9637-4758-9da4-648567149f25-lib-modules\") pod \"calico-node-jhbwd\" (UID: \"49b2d360-9637-4758-9da4-648567149f25\") " pod="calico-system/calico-node-jhbwd" Dec 16 12:15:31.901357 kubelet[3629]: I1216 12:15:31.901184 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49b2d360-9637-4758-9da4-648567149f25-tigera-ca-bundle\") pod \"calico-node-jhbwd\" (UID: \"49b2d360-9637-4758-9da4-648567149f25\") " 
pod="calico-system/calico-node-jhbwd" Dec 16 12:15:31.901357 kubelet[3629]: I1216 12:15:31.901203 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/49b2d360-9637-4758-9da4-648567149f25-flexvol-driver-host\") pod \"calico-node-jhbwd\" (UID: \"49b2d360-9637-4758-9da4-648567149f25\") " pod="calico-system/calico-node-jhbwd" Dec 16 12:15:31.901357 kubelet[3629]: I1216 12:15:31.901222 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/49b2d360-9637-4758-9da4-648567149f25-var-run-calico\") pod \"calico-node-jhbwd\" (UID: \"49b2d360-9637-4758-9da4-648567149f25\") " pod="calico-system/calico-node-jhbwd" Dec 16 12:15:31.901357 kubelet[3629]: I1216 12:15:31.901241 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/49b2d360-9637-4758-9da4-648567149f25-var-lib-calico\") pod \"calico-node-jhbwd\" (UID: \"49b2d360-9637-4758-9da4-648567149f25\") " pod="calico-system/calico-node-jhbwd" Dec 16 12:15:31.901651 kubelet[3629]: I1216 12:15:31.901502 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/49b2d360-9637-4758-9da4-648567149f25-policysync\") pod \"calico-node-jhbwd\" (UID: \"49b2d360-9637-4758-9da4-648567149f25\") " pod="calico-system/calico-node-jhbwd" Dec 16 12:15:31.901651 kubelet[3629]: I1216 12:15:31.901540 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49b2d360-9637-4758-9da4-648567149f25-xtables-lock\") pod \"calico-node-jhbwd\" (UID: \"49b2d360-9637-4758-9da4-648567149f25\") " pod="calico-system/calico-node-jhbwd" Dec 16 12:15:31.901651 kubelet[3629]: I1216 12:15:31.901553 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/49b2d360-9637-4758-9da4-648567149f25-cni-bin-dir\") pod \"calico-node-jhbwd\" (UID: \"49b2d360-9637-4758-9da4-648567149f25\") " pod="calico-system/calico-node-jhbwd" Dec 16 12:15:31.901820 kubelet[3629]: I1216 12:15:31.901768 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/49b2d360-9637-4758-9da4-648567149f25-cni-log-dir\") pod \"calico-node-jhbwd\" (UID: \"49b2d360-9637-4758-9da4-648567149f25\") " pod="calico-system/calico-node-jhbwd" Dec 16 12:15:31.902052 kubelet[3629]: I1216 12:15:31.901997 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nfwb\" (UniqueName: \"kubernetes.io/projected/49b2d360-9637-4758-9da4-648567149f25-kube-api-access-8nfwb\") pod \"calico-node-jhbwd\" (UID: \"49b2d360-9637-4758-9da4-648567149f25\") " pod="calico-system/calico-node-jhbwd" Dec 16 12:15:31.927948 systemd[1]: Started cri-containerd-23229cd8aef38f1c9fe98f23aaba77ff30226b874e9daa0cc0e76c48315f938d.scope - libcontainer container 23229cd8aef38f1c9fe98f23aaba77ff30226b874e9daa0cc0e76c48315f938d. 
Dec 16 12:15:31.944255 kubelet[3629]: E1216 12:15:31.944093 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lft87" podUID="8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85" Dec 16 12:15:31.952000 audit: BPF prog-id=175 op=LOAD Dec 16 12:15:31.953000 audit: BPF prog-id=176 op=LOAD Dec 16 12:15:31.953000 audit[4053]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=4042 pid=4053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:31.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233323239636438616566333866316339666539386632336161626137 Dec 16 12:15:31.953000 audit: BPF prog-id=176 op=UNLOAD Dec 16 12:15:31.953000 audit[4053]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4042 pid=4053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:31.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233323239636438616566333866316339666539386632336161626137 Dec 16 12:15:31.953000 audit: BPF prog-id=177 op=LOAD Dec 16 12:15:31.953000 audit[4053]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=4042 pid=4053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:31.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233323239636438616566333866316339666539386632336161626137 Dec 16 12:15:31.954000 audit: BPF prog-id=178 op=LOAD Dec 16 12:15:31.954000 audit[4053]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=4042 pid=4053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:31.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233323239636438616566333866316339666539386632336161626137 Dec 16 12:15:31.954000 audit: BPF prog-id=178 op=UNLOAD Dec 16 12:15:31.954000 audit[4053]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4042 pid=4053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:31.954000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233323239636438616566333866316339666539386632336161626137 Dec 16 12:15:31.954000 audit: BPF prog-id=177 op=UNLOAD Dec 16 12:15:31.954000 audit[4053]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4042 pid=4053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:31.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233323239636438616566333866316339666539386632336161626137 Dec 16 12:15:31.954000 audit: BPF prog-id=179 op=LOAD Dec 16 12:15:31.954000 audit[4053]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=4042 pid=4053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:31.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233323239636438616566333866316339666539386632336161626137 Dec 16 12:15:31.988030 containerd[2077]: time="2025-12-16T12:15:31.987982882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-545686595d-kr8z2,Uid:ce86a9ef-6d65-4cb6-be1e-6316fd507fbb,Namespace:calico-system,Attempt:0,} returns sandbox id \"23229cd8aef38f1c9fe98f23aaba77ff30226b874e9daa0cc0e76c48315f938d\"" Dec 16 12:15:31.990305 containerd[2077]: time="2025-12-16T12:15:31.990124984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 12:15:32.003235 kubelet[3629]: I1216 12:15:32.003142 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85-registration-dir\") pod \"csi-node-driver-lft87\" (UID: \"8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85\") " pod="calico-system/csi-node-driver-lft87" Dec 16 12:15:32.003235 kubelet[3629]: I1216 12:15:32.003222 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85-varrun\") pod \"csi-node-driver-lft87\" (UID: \"8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85\") " pod="calico-system/csi-node-driver-lft87" Dec 16 12:15:32.003235 kubelet[3629]: I1216 12:15:32.003235 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtjxh\" (UniqueName: \"kubernetes.io/projected/8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85-kube-api-access-wtjxh\") pod \"csi-node-driver-lft87\" (UID: \"8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85\") " pod="calico-system/csi-node-driver-lft87" Dec 16 12:15:32.003401 kubelet[3629]: I1216 12:15:32.003265 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85-kubelet-dir\") pod \"csi-node-driver-lft87\" (UID: \"8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85\") " pod="calico-system/csi-node-driver-lft87" Dec 16 12:15:32.003401 kubelet[3629]: I1216 12:15:32.003275 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85-socket-dir\") pod \"csi-node-driver-lft87\" (UID: \"8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85\") " pod="calico-system/csi-node-driver-lft87" Dec 16 12:15:32.011426 kubelet[3629]: E1216 12:15:32.010845 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:32.011426 kubelet[3629]: W1216 12:15:32.010869 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:32.011426 kubelet[3629]: E1216 12:15:32.010894 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:32.011426 kubelet[3629]: E1216 12:15:32.011085 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:32.011426 kubelet[3629]: W1216 12:15:32.011092 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:32.011426 kubelet[3629]: E1216 12:15:32.011103 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:32.012271 kubelet[3629]: E1216 12:15:32.012140 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:32.012519 kubelet[3629]: W1216 12:15:32.012446 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:32.012628 kubelet[3629]: E1216 12:15:32.012603 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:32.012910 kubelet[3629]: E1216 12:15:32.012898 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:32.013071 kubelet[3629]: W1216 12:15:32.012996 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:32.013071 kubelet[3629]: E1216 12:15:32.013024 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:32.019297 kubelet[3629]: E1216 12:15:32.019275 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:32.019297 kubelet[3629]: W1216 12:15:32.019293 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:32.019383 kubelet[3629]: E1216 12:15:32.019304 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:32.053951 containerd[2077]: time="2025-12-16T12:15:32.053912312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jhbwd,Uid:49b2d360-9637-4758-9da4-648567149f25,Namespace:calico-system,Attempt:0,}" Dec 16 12:15:32.093311 containerd[2077]: time="2025-12-16T12:15:32.093260566Z" level=info msg="connecting to shim 32b10c45a1587f1bb2c44a12858dd7682d9b57a0bc1568824e671f821fd31812" address="unix:///run/containerd/s/a6987e319db5cb52b4a46f03281cde03578d01c5c997b939501eaca305e63c57" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:15:32.105317 kubelet[3629]: E1216 12:15:32.103871 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:32.105317 kubelet[3629]: W1216 12:15:32.104236 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:32.105317 kubelet[3629]: E1216 12:15:32.104260 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:32.106051 kubelet[3629]: E1216 12:15:32.105905 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:32.106051 kubelet[3629]: W1216 12:15:32.105983 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:32.106051 kubelet[3629]: E1216 12:15:32.106000 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:32.106302 kubelet[3629]: E1216 12:15:32.106270 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:32.106302 kubelet[3629]: W1216 12:15:32.106296 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:32.106369 kubelet[3629]: E1216 12:15:32.106307 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:32.106463 kubelet[3629]: E1216 12:15:32.106446 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:32.106463 kubelet[3629]: W1216 12:15:32.106455 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:32.106463 kubelet[3629]: E1216 12:15:32.106462 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:32.106623 kubelet[3629]: E1216 12:15:32.106606 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:32.106623 kubelet[3629]: W1216 12:15:32.106614 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:32.106623 kubelet[3629]: E1216 12:15:32.106621 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:32.106810 kubelet[3629]: E1216 12:15:32.106798 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:32.106810 kubelet[3629]: W1216 12:15:32.106807 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:32.107375 kubelet[3629]: E1216 12:15:32.106814 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:32.107375 kubelet[3629]: E1216 12:15:32.106963 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:32.107375 kubelet[3629]: W1216 12:15:32.106970 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:32.107375 kubelet[3629]: E1216 12:15:32.106977 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:32.107639 kubelet[3629]: E1216 12:15:32.107528 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:32.107639 kubelet[3629]: W1216 12:15:32.107541 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:32.107639 kubelet[3629]: E1216 12:15:32.107552 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:32.107899 kubelet[3629]: E1216 12:15:32.107780 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:32.107899 kubelet[3629]: W1216 12:15:32.107792 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:32.107899 kubelet[3629]: E1216 12:15:32.107801 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:32.108063 kubelet[3629]: E1216 12:15:32.108051 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:32.108252 kubelet[3629]: W1216 12:15:32.108179 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:32.108252 kubelet[3629]: E1216 12:15:32.108196 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:32.108722 kubelet[3629]: E1216 12:15:32.108655 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:32.108722 kubelet[3629]: W1216 12:15:32.108669 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:32.108722 kubelet[3629]: E1216 12:15:32.108680 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:32.109633 kubelet[3629]: E1216 12:15:32.109620 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:32.109892 kubelet[3629]: W1216 12:15:32.109701 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:32.109892 kubelet[3629]: E1216 12:15:32.109717 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:32.110099 kubelet[3629]: E1216 12:15:32.110006 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:32.110099 kubelet[3629]: W1216 12:15:32.110018 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:32.110099 kubelet[3629]: E1216 12:15:32.110028 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:32.110231 kubelet[3629]: E1216 12:15:32.110221 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:32.110478 kubelet[3629]: W1216 12:15:32.110301 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:32.110478 kubelet[3629]: E1216 12:15:32.110317 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:32.110615 kubelet[3629]: E1216 12:15:32.110603 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:32.110855 kubelet[3629]: W1216 12:15:32.110713 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:32.110855 kubelet[3629]: E1216 12:15:32.110730 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:32.111081 kubelet[3629]: E1216 12:15:32.111070 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:32.111224 kubelet[3629]: W1216 12:15:32.111137 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:32.111224 kubelet[3629]: E1216 12:15:32.111150 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:32.111483 kubelet[3629]: E1216 12:15:32.111471 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:32.111637 kubelet[3629]: W1216 12:15:32.111535 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:32.111637 kubelet[3629]: E1216 12:15:32.111558 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:32.111838 kubelet[3629]: E1216 12:15:32.111827 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:32.111984 kubelet[3629]: W1216 12:15:32.111896 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:32.111984 kubelet[3629]: E1216 12:15:32.111911 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:32.112118 kubelet[3629]: E1216 12:15:32.112109 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:32.112233 kubelet[3629]: W1216 12:15:32.112165 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:32.112233 kubelet[3629]: E1216 12:15:32.112177 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:32.112671 kubelet[3629]: E1216 12:15:32.112538 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:32.112671 kubelet[3629]: W1216 12:15:32.112549 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:32.112671 kubelet[3629]: E1216 12:15:32.112558 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:32.113203 kubelet[3629]: E1216 12:15:32.113182 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:32.113203 kubelet[3629]: W1216 12:15:32.113198 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:32.113296 kubelet[3629]: E1216 12:15:32.113211 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:32.113984 kubelet[3629]: E1216 12:15:32.113962 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:32.114443 kubelet[3629]: W1216 12:15:32.113978 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:32.114483 kubelet[3629]: E1216 12:15:32.114451 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:32.115420 kubelet[3629]: E1216 12:15:32.114933 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:32.115420 kubelet[3629]: W1216 12:15:32.115403 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:32.115502 kubelet[3629]: E1216 12:15:32.115426 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:32.116100 kubelet[3629]: E1216 12:15:32.115960 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:32.116100 kubelet[3629]: W1216 12:15:32.115988 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:32.116100 kubelet[3629]: E1216 12:15:32.116001 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:32.117032 kubelet[3629]: E1216 12:15:32.117012 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:32.117032 kubelet[3629]: W1216 12:15:32.117028 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:32.117117 kubelet[3629]: E1216 12:15:32.117040 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:32.120053 kubelet[3629]: E1216 12:15:32.120028 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:32.120053 kubelet[3629]: W1216 12:15:32.120046 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:32.120053 kubelet[3629]: E1216 12:15:32.120059 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:32.122924 systemd[1]: Started cri-containerd-32b10c45a1587f1bb2c44a12858dd7682d9b57a0bc1568824e671f821fd31812.scope - libcontainer container 32b10c45a1587f1bb2c44a12858dd7682d9b57a0bc1568824e671f821fd31812. 
Dec 16 12:15:32.130000 audit: BPF prog-id=180 op=LOAD Dec 16 12:15:32.130000 audit: BPF prog-id=181 op=LOAD Dec 16 12:15:32.130000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4093 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:32.130000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332623130633435613135383766316262326334346131323835386464 Dec 16 12:15:32.130000 audit: BPF prog-id=181 op=UNLOAD Dec 16 12:15:32.130000 audit[4105]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4093 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:32.130000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332623130633435613135383766316262326334346131323835386464 Dec 16 12:15:32.130000 audit: BPF prog-id=182 op=LOAD Dec 16 12:15:32.130000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4093 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:32.130000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332623130633435613135383766316262326334346131323835386464 Dec 16 12:15:32.131000 audit: BPF prog-id=183 op=LOAD Dec 16 12:15:32.131000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4093 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:32.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332623130633435613135383766316262326334346131323835386464 Dec 16 12:15:32.131000 audit: BPF prog-id=183 op=UNLOAD Dec 16 12:15:32.131000 audit[4105]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4093 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:32.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332623130633435613135383766316262326334346131323835386464 Dec 16 12:15:32.131000 audit: BPF prog-id=182 op=UNLOAD Dec 16 12:15:32.131000 audit[4105]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4093 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:32.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332623130633435613135383766316262326334346131323835386464 Dec 16 12:15:32.131000 audit: BPF prog-id=184 op=LOAD Dec 16 12:15:32.131000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4093 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:32.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332623130633435613135383766316262326334346131323835386464 Dec 16 12:15:32.148289 containerd[2077]: time="2025-12-16T12:15:32.148242101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jhbwd,Uid:49b2d360-9637-4758-9da4-648567149f25,Namespace:calico-system,Attempt:0,} returns sandbox id \"32b10c45a1587f1bb2c44a12858dd7682d9b57a0bc1568824e671f821fd31812\"" Dec 16 12:15:33.129038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount986656580.mount: Deactivated successfully. Dec 16 12:15:33.477839 kubelet[3629]: E1216 12:15:33.477025 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lft87" podUID="8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85" Dec 16 12:15:33.634311 containerd[2077]: time="2025-12-16T12:15:33.633964747Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:33.636991 containerd[2077]: time="2025-12-16T12:15:33.636942585Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31717602" Dec 16 12:15:33.640391 containerd[2077]: time="2025-12-16T12:15:33.640317494Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:33.643892 containerd[2077]: time="2025-12-16T12:15:33.643850760Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:33.644430 containerd[2077]: time="2025-12-16T12:15:33.644197843Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.653748602s" Dec 16 12:15:33.644430 containerd[2077]: 
time="2025-12-16T12:15:33.644423244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 16 12:15:33.646423 containerd[2077]: time="2025-12-16T12:15:33.646196949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 12:15:33.664921 containerd[2077]: time="2025-12-16T12:15:33.664890364Z" level=info msg="CreateContainer within sandbox \"23229cd8aef38f1c9fe98f23aaba77ff30226b874e9daa0cc0e76c48315f938d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 12:15:33.685153 containerd[2077]: time="2025-12-16T12:15:33.685108593Z" level=info msg="Container 43ba489d2274d1213bc51c3aae8323890d92819e263b31b9030fbfc5fc33b843: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:15:33.704737 containerd[2077]: time="2025-12-16T12:15:33.704251162Z" level=info msg="CreateContainer within sandbox \"23229cd8aef38f1c9fe98f23aaba77ff30226b874e9daa0cc0e76c48315f938d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"43ba489d2274d1213bc51c3aae8323890d92819e263b31b9030fbfc5fc33b843\"" Dec 16 12:15:33.706425 containerd[2077]: time="2025-12-16T12:15:33.706370334Z" level=info msg="StartContainer for \"43ba489d2274d1213bc51c3aae8323890d92819e263b31b9030fbfc5fc33b843\"" Dec 16 12:15:33.707581 containerd[2077]: time="2025-12-16T12:15:33.707553954Z" level=info msg="connecting to shim 43ba489d2274d1213bc51c3aae8323890d92819e263b31b9030fbfc5fc33b843" address="unix:///run/containerd/s/6c79ab2069d1785ec46c8372875ef14904afc8251af16052155289de50c16954" protocol=ttrpc version=3 Dec 16 12:15:33.733027 systemd[1]: Started cri-containerd-43ba489d2274d1213bc51c3aae8323890d92819e263b31b9030fbfc5fc33b843.scope - libcontainer container 43ba489d2274d1213bc51c3aae8323890d92819e263b31b9030fbfc5fc33b843. 
Dec 16 12:15:33.762000 audit: BPF prog-id=185 op=LOAD Dec 16 12:15:33.763000 audit: BPF prog-id=186 op=LOAD Dec 16 12:15:33.763000 audit[4168]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c180 a2=98 a3=0 items=0 ppid=4042 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:33.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433626134383964323237346431323133626335316333616165383332 Dec 16 12:15:33.764000 audit: BPF prog-id=186 op=UNLOAD Dec 16 12:15:33.764000 audit[4168]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4042 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:33.764000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433626134383964323237346431323133626335316333616165383332 Dec 16 12:15:33.764000 audit: BPF prog-id=187 op=LOAD Dec 16 12:15:33.764000 audit[4168]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=4042 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:33.764000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433626134383964323237346431323133626335316333616165383332 Dec 16 12:15:33.764000 audit: BPF prog-id=188 op=LOAD Dec 16 12:15:33.764000 audit[4168]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400010c168 a2=98 a3=0 items=0 ppid=4042 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:33.764000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433626134383964323237346431323133626335316333616165383332 Dec 16 12:15:33.764000 audit: BPF prog-id=188 op=UNLOAD Dec 16 12:15:33.764000 audit[4168]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4042 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:33.764000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433626134383964323237346431323133626335316333616165383332 Dec 16 12:15:33.764000 audit: BPF prog-id=187 op=UNLOAD Dec 16 12:15:33.764000 audit[4168]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4042 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:33.764000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433626134383964323237346431323133626335316333616165383332 Dec 16 12:15:33.764000 audit: BPF prog-id=189 op=LOAD Dec 16 12:15:33.764000 audit[4168]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c648 a2=98 a3=0 items=0 ppid=4042 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:33.764000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433626134383964323237346431323133626335316333616165383332 Dec 16 12:15:33.806471 containerd[2077]: time="2025-12-16T12:15:33.806375298Z" level=info msg="StartContainer for \"43ba489d2274d1213bc51c3aae8323890d92819e263b31b9030fbfc5fc33b843\" returns successfully" Dec 16 12:15:34.586322 kubelet[3629]: I1216 12:15:34.586212 3629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-545686595d-kr8z2" podStartSLOduration=1.930032716 podStartE2EDuration="3.586196377s" podCreationTimestamp="2025-12-16 12:15:31 +0000 UTC" firstStartedPulling="2025-12-16 12:15:31.989700663 +0000 UTC m=+20.590693365" lastFinishedPulling="2025-12-16 12:15:33.645864316 +0000 UTC m=+22.246857026" observedRunningTime="2025-12-16 12:15:34.58600109 +0000 UTC m=+23.186993792" watchObservedRunningTime="2025-12-16 12:15:34.586196377 +0000 UTC m=+23.187189079" Dec 16 12:15:34.619575 kubelet[3629]: E1216 12:15:34.619494 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.619575 kubelet[3629]: W1216 12:15:34.619521 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.619575 kubelet[3629]: E1216 12:15:34.619542 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:34.620096 kubelet[3629]: E1216 12:15:34.620044 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.620170 kubelet[3629]: W1216 12:15:34.620059 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.620222 kubelet[3629]: E1216 12:15:34.620211 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:34.620453 kubelet[3629]: E1216 12:15:34.620442 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.620563 kubelet[3629]: W1216 12:15:34.620529 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.620563 kubelet[3629]: E1216 12:15:34.620545 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:34.620849 kubelet[3629]: E1216 12:15:34.620791 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.620849 kubelet[3629]: W1216 12:15:34.620801 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.620849 kubelet[3629]: E1216 12:15:34.620813 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:34.621107 kubelet[3629]: E1216 12:15:34.621079 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.621107 kubelet[3629]: W1216 12:15:34.621089 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.621221 kubelet[3629]: E1216 12:15:34.621099 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:34.621387 kubelet[3629]: E1216 12:15:34.621378 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.621535 kubelet[3629]: W1216 12:15:34.621426 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.621535 kubelet[3629]: E1216 12:15:34.621437 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:34.621679 kubelet[3629]: E1216 12:15:34.621641 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.621679 kubelet[3629]: W1216 12:15:34.621651 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.621679 kubelet[3629]: E1216 12:15:34.621660 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:34.622000 kubelet[3629]: E1216 12:15:34.621937 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.622000 kubelet[3629]: W1216 12:15:34.621948 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.622000 kubelet[3629]: E1216 12:15:34.621957 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:34.622278 kubelet[3629]: E1216 12:15:34.622234 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.622278 kubelet[3629]: W1216 12:15:34.622246 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.622278 kubelet[3629]: E1216 12:15:34.622254 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:34.622554 kubelet[3629]: E1216 12:15:34.622503 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.622554 kubelet[3629]: W1216 12:15:34.622513 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.622554 kubelet[3629]: E1216 12:15:34.622523 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:34.622855 kubelet[3629]: E1216 12:15:34.622801 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.622855 kubelet[3629]: W1216 12:15:34.622812 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.622855 kubelet[3629]: E1216 12:15:34.622822 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:34.623125 kubelet[3629]: E1216 12:15:34.623065 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.623125 kubelet[3629]: W1216 12:15:34.623076 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.623125 kubelet[3629]: E1216 12:15:34.623085 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:34.623382 kubelet[3629]: E1216 12:15:34.623332 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.623382 kubelet[3629]: W1216 12:15:34.623341 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.623382 kubelet[3629]: E1216 12:15:34.623351 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:34.623656 kubelet[3629]: E1216 12:15:34.623608 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.623656 kubelet[3629]: W1216 12:15:34.623618 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.623656 kubelet[3629]: E1216 12:15:34.623627 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:34.624070 kubelet[3629]: E1216 12:15:34.623991 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.624070 kubelet[3629]: W1216 12:15:34.624003 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.624070 kubelet[3629]: E1216 12:15:34.624014 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:34.629297 kubelet[3629]: E1216 12:15:34.629223 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.629297 kubelet[3629]: W1216 12:15:34.629235 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.629297 kubelet[3629]: E1216 12:15:34.629243 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:34.629537 kubelet[3629]: E1216 12:15:34.629526 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.629648 kubelet[3629]: W1216 12:15:34.629592 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.629648 kubelet[3629]: E1216 12:15:34.629607 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:34.629816 kubelet[3629]: E1216 12:15:34.629797 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.629857 kubelet[3629]: W1216 12:15:34.629813 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.629857 kubelet[3629]: E1216 12:15:34.629827 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:34.629968 kubelet[3629]: E1216 12:15:34.629955 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.629968 kubelet[3629]: W1216 12:15:34.629964 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.630024 kubelet[3629]: E1216 12:15:34.629972 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:34.630074 kubelet[3629]: E1216 12:15:34.630061 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.630074 kubelet[3629]: W1216 12:15:34.630068 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.630074 kubelet[3629]: E1216 12:15:34.630074 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:34.630204 kubelet[3629]: E1216 12:15:34.630193 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.630204 kubelet[3629]: W1216 12:15:34.630200 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.630249 kubelet[3629]: E1216 12:15:34.630205 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:34.630435 kubelet[3629]: E1216 12:15:34.630423 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.630490 kubelet[3629]: W1216 12:15:34.630481 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.630528 kubelet[3629]: E1216 12:15:34.630520 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:34.630740 kubelet[3629]: E1216 12:15:34.630725 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.630740 kubelet[3629]: W1216 12:15:34.630736 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.630740 kubelet[3629]: E1216 12:15:34.630744 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:34.630883 kubelet[3629]: E1216 12:15:34.630872 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.630883 kubelet[3629]: W1216 12:15:34.630877 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.630947 kubelet[3629]: E1216 12:15:34.630884 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:34.630994 kubelet[3629]: E1216 12:15:34.630983 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.630994 kubelet[3629]: W1216 12:15:34.630990 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.631043 kubelet[3629]: E1216 12:15:34.630995 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:34.631116 kubelet[3629]: E1216 12:15:34.631104 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.631116 kubelet[3629]: W1216 12:15:34.631112 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.631155 kubelet[3629]: E1216 12:15:34.631117 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:34.631410 kubelet[3629]: E1216 12:15:34.631330 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.631410 kubelet[3629]: W1216 12:15:34.631343 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.631410 kubelet[3629]: E1216 12:15:34.631352 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:34.631622 kubelet[3629]: E1216 12:15:34.631612 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.631691 kubelet[3629]: W1216 12:15:34.631681 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.631747 kubelet[3629]: E1216 12:15:34.631735 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:34.631985 kubelet[3629]: E1216 12:15:34.631955 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.631985 kubelet[3629]: W1216 12:15:34.631965 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.631985 kubelet[3629]: E1216 12:15:34.631973 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:34.632290 kubelet[3629]: E1216 12:15:34.632221 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.632290 kubelet[3629]: W1216 12:15:34.632231 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.632290 kubelet[3629]: E1216 12:15:34.632240 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:34.632564 kubelet[3629]: E1216 12:15:34.632509 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.632564 kubelet[3629]: W1216 12:15:34.632520 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.632564 kubelet[3629]: E1216 12:15:34.632529 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:34.632810 kubelet[3629]: E1216 12:15:34.632775 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.632810 kubelet[3629]: W1216 12:15:34.632785 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.632810 kubelet[3629]: E1216 12:15:34.632794 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:34.633116 kubelet[3629]: E1216 12:15:34.633081 3629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:34.633116 kubelet[3629]: W1216 12:15:34.633091 3629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:34.633116 kubelet[3629]: E1216 12:15:34.633099 3629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:34.828299 containerd[2077]: time="2025-12-16T12:15:34.828241062Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:34.831664 containerd[2077]: time="2025-12-16T12:15:34.831592193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 16 12:15:34.834863 containerd[2077]: time="2025-12-16T12:15:34.834833692Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:34.838483 containerd[2077]: time="2025-12-16T12:15:34.838400752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:34.839028 containerd[2077]: time="2025-12-16T12:15:34.838997630Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.192774615s" Dec 16 12:15:34.839067 containerd[2077]: time="2025-12-16T12:15:34.839030209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 16 12:15:34.847734 containerd[2077]: time="2025-12-16T12:15:34.847691543Z" level=info msg="CreateContainer within sandbox \"32b10c45a1587f1bb2c44a12858dd7682d9b57a0bc1568824e671f821fd31812\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 12:15:34.871375 containerd[2077]: time="2025-12-16T12:15:34.871198794Z" level=info msg="Container b8fcf15c9b26bdd9d8f2a253e5058a00bf9c212d829680979b83b76431ca8da3: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:15:34.888124 containerd[2077]: time="2025-12-16T12:15:34.888089885Z" level=info msg="CreateContainer within sandbox \"32b10c45a1587f1bb2c44a12858dd7682d9b57a0bc1568824e671f821fd31812\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b8fcf15c9b26bdd9d8f2a253e5058a00bf9c212d829680979b83b76431ca8da3\"" Dec 16 12:15:34.889194 containerd[2077]: time="2025-12-16T12:15:34.889024126Z" level=info msg="StartContainer for \"b8fcf15c9b26bdd9d8f2a253e5058a00bf9c212d829680979b83b76431ca8da3\"" Dec 16 12:15:34.890442 containerd[2077]: time="2025-12-16T12:15:34.890412617Z" level=info msg="connecting to shim 
b8fcf15c9b26bdd9d8f2a253e5058a00bf9c212d829680979b83b76431ca8da3" address="unix:///run/containerd/s/a6987e319db5cb52b4a46f03281cde03578d01c5c997b939501eaca305e63c57" protocol=ttrpc version=3 Dec 16 12:15:34.913920 systemd[1]: Started cri-containerd-b8fcf15c9b26bdd9d8f2a253e5058a00bf9c212d829680979b83b76431ca8da3.scope - libcontainer container b8fcf15c9b26bdd9d8f2a253e5058a00bf9c212d829680979b83b76431ca8da3. Dec 16 12:15:34.953000 audit: BPF prog-id=190 op=LOAD Dec 16 12:15:34.953000 audit[4244]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4093 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:34.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238666366313563396232366264643964386632613235336535303538 Dec 16 12:15:34.953000 audit: BPF prog-id=191 op=LOAD Dec 16 12:15:34.953000 audit[4244]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4093 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:34.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238666366313563396232366264643964386632613235336535303538 Dec 16 12:15:34.953000 audit: BPF prog-id=191 op=UNLOAD Dec 16 12:15:34.953000 audit[4244]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4093 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:34.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238666366313563396232366264643964386632613235336535303538 Dec 16 12:15:34.953000 audit: BPF prog-id=190 op=UNLOAD Dec 16 12:15:34.953000 audit[4244]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4093 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:34.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238666366313563396232366264643964386632613235336535303538 Dec 16 12:15:34.953000 audit: BPF prog-id=192 op=LOAD Dec 16 12:15:34.953000 audit[4244]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4093 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:34.953000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238666366313563396232366264643964386632613235336535303538 Dec 16 12:15:34.983211 containerd[2077]: time="2025-12-16T12:15:34.983038506Z" level=info msg="StartContainer for \"b8fcf15c9b26bdd9d8f2a253e5058a00bf9c212d829680979b83b76431ca8da3\" returns successfully" Dec 16 12:15:34.989560 systemd[1]: cri-containerd-b8fcf15c9b26bdd9d8f2a253e5058a00bf9c212d829680979b83b76431ca8da3.scope: Deactivated successfully. Dec 16 12:15:34.991000 audit: BPF prog-id=192 op=UNLOAD Dec 16 12:15:34.998534 containerd[2077]: time="2025-12-16T12:15:34.993347808Z" level=info msg="received container exit event container_id:\"b8fcf15c9b26bdd9d8f2a253e5058a00bf9c212d829680979b83b76431ca8da3\" id:\"b8fcf15c9b26bdd9d8f2a253e5058a00bf9c212d829680979b83b76431ca8da3\" pid:4257 exited_at:{seconds:1765887334 nanos:992951225}" Dec 16 12:15:35.010772 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8fcf15c9b26bdd9d8f2a253e5058a00bf9c212d829680979b83b76431ca8da3-rootfs.mount: Deactivated successfully. Dec 16 12:15:35.477029 kubelet[3629]: E1216 12:15:35.476968 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lft87" podUID="8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85" Dec 16 12:15:35.573867 kubelet[3629]: I1216 12:15:35.573832 3629 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:15:36.578865 containerd[2077]: time="2025-12-16T12:15:36.578632943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 12:15:37.477013 kubelet[3629]: E1216 12:15:37.476967 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lft87" podUID="8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85" Dec 16 12:15:38.763685 containerd[2077]: time="2025-12-16T12:15:38.763637934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:38.766429 containerd[2077]: time="2025-12-16T12:15:38.766383351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Dec 16 12:15:38.770090 containerd[2077]: time="2025-12-16T12:15:38.770044317Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:38.774349 containerd[2077]: time="2025-12-16T12:15:38.774288729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:38.774892 containerd[2077]: time="2025-12-16T12:15:38.774656908Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.195848215s" Dec 16 12:15:38.774892 containerd[2077]: time="2025-12-16T12:15:38.774682734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 16 12:15:38.782862 containerd[2077]: time="2025-12-16T12:15:38.782832152Z" level=info msg="CreateContainer within sandbox \"32b10c45a1587f1bb2c44a12858dd7682d9b57a0bc1568824e671f821fd31812\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 12:15:38.802797 containerd[2077]: time="2025-12-16T12:15:38.800553039Z" level=info msg="Container 06970b4cb70157c1c646b719c4a52e98c0090f90161395565ae31bf0b8300b6a: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:15:38.817171 containerd[2077]: time="2025-12-16T12:15:38.817116379Z" level=info msg="CreateContainer within sandbox \"32b10c45a1587f1bb2c44a12858dd7682d9b57a0bc1568824e671f821fd31812\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"06970b4cb70157c1c646b719c4a52e98c0090f90161395565ae31bf0b8300b6a\"" Dec 16 12:15:38.818038 containerd[2077]: time="2025-12-16T12:15:38.817916950Z" level=info msg="StartContainer for \"06970b4cb70157c1c646b719c4a52e98c0090f90161395565ae31bf0b8300b6a\"" Dec 16 12:15:38.819455 containerd[2077]: time="2025-12-16T12:15:38.819418786Z" level=info msg="connecting to shim 06970b4cb70157c1c646b719c4a52e98c0090f90161395565ae31bf0b8300b6a" address="unix:///run/containerd/s/a6987e319db5cb52b4a46f03281cde03578d01c5c997b939501eaca305e63c57" protocol=ttrpc version=3 Dec 16 12:15:38.840924 systemd[1]: Started cri-containerd-06970b4cb70157c1c646b719c4a52e98c0090f90161395565ae31bf0b8300b6a.scope - libcontainer container 06970b4cb70157c1c646b719c4a52e98c0090f90161395565ae31bf0b8300b6a. 
Dec 16 12:15:38.900786 kernel: kauditd_printk_skb: 84 callbacks suppressed Dec 16 12:15:38.900924 kernel: audit: type=1334 audit(1765887338.896:592): prog-id=193 op=LOAD Dec 16 12:15:38.896000 audit: BPF prog-id=193 op=LOAD Dec 16 12:15:38.896000 audit[4302]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4093 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:38.921713 kernel: audit: type=1300 audit(1765887338.896:592): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4093 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:38.896000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036393730623463623730313537633163363436623731396334613532 Dec 16 12:15:38.940349 kernel: audit: type=1327 audit(1765887338.896:592): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036393730623463623730313537633163363436623731396334613532 Dec 16 12:15:38.899000 audit: BPF prog-id=194 op=LOAD Dec 16 12:15:38.946639 kernel: audit: type=1334 audit(1765887338.899:593): prog-id=194 op=LOAD Dec 16 12:15:38.899000 audit[4302]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4093 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:38.964047 kernel: audit: type=1300 audit(1765887338.899:593): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4093 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:38.899000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036393730623463623730313537633163363436623731396334613532 Dec 16 12:15:38.980595 kernel: audit: type=1327 audit(1765887338.899:593): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036393730623463623730313537633163363436623731396334613532 Dec 16 12:15:38.903000 audit: BPF prog-id=194 op=UNLOAD Dec 16 12:15:38.986820 kernel: audit: type=1334 audit(1765887338.903:594): prog-id=194 op=UNLOAD Dec 16 12:15:38.988097 kernel: audit: type=1300 audit(1765887338.903:594): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4093 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:38.903000 
audit[4302]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4093 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:38.991363 containerd[2077]: time="2025-12-16T12:15:38.991320751Z" level=info msg="StartContainer for \"06970b4cb70157c1c646b719c4a52e98c0090f90161395565ae31bf0b8300b6a\" returns successfully" Dec 16 12:15:38.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036393730623463623730313537633163363436623731396334613532 Dec 16 12:15:39.021101 kernel: audit: type=1327 audit(1765887338.903:594): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036393730623463623730313537633163363436623731396334613532 Dec 16 12:15:38.903000 audit: BPF prog-id=193 op=UNLOAD Dec 16 12:15:39.027185 kernel: audit: type=1334 audit(1765887338.903:595): prog-id=193 op=UNLOAD Dec 16 12:15:38.903000 audit[4302]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4093 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:38.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036393730623463623730313537633163363436623731396334613532 Dec 16 12:15:38.903000 audit: BPF prog-id=195 op=LOAD Dec 16 12:15:38.903000 audit[4302]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4093 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:38.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036393730623463623730313537633163363436623731396334613532 Dec 16 12:15:39.476677 kubelet[3629]: E1216 12:15:39.476217 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lft87" podUID="8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85" Dec 16 12:15:40.095251 systemd[1]: cri-containerd-06970b4cb70157c1c646b719c4a52e98c0090f90161395565ae31bf0b8300b6a.scope: Deactivated successfully. Dec 16 12:15:40.095539 systemd[1]: cri-containerd-06970b4cb70157c1c646b719c4a52e98c0090f90161395565ae31bf0b8300b6a.scope: Consumed 314ms CPU time, 186.9M memory peak, 165.9M written to disk. 
Dec 16 12:15:40.097313 containerd[2077]: time="2025-12-16T12:15:40.097276819Z" level=info msg="received container exit event container_id:\"06970b4cb70157c1c646b719c4a52e98c0090f90161395565ae31bf0b8300b6a\" id:\"06970b4cb70157c1c646b719c4a52e98c0090f90161395565ae31bf0b8300b6a\" pid:4315 exited_at:{seconds:1765887340 nanos:96452525}" Dec 16 12:15:40.097000 audit: BPF prog-id=195 op=UNLOAD Dec 16 12:15:40.115909 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06970b4cb70157c1c646b719c4a52e98c0090f90161395565ae31bf0b8300b6a-rootfs.mount: Deactivated successfully. Dec 16 12:15:40.183682 kubelet[3629]: I1216 12:15:40.183605 3629 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 16 12:15:40.952159 systemd[1]: Created slice kubepods-burstable-pod306c4b20_3806_46c9_b86b_305f578267e9.slice - libcontainer container kubepods-burstable-pod306c4b20_3806_46c9_b86b_305f578267e9.slice. Dec 16 12:15:40.964043 systemd[1]: Created slice kubepods-besteffort-pod4517fd6f_78fe_47ae_9d6d_f6ee6d3ebba7.slice - libcontainer container kubepods-besteffort-pod4517fd6f_78fe_47ae_9d6d_f6ee6d3ebba7.slice. Dec 16 12:15:40.968674 kubelet[3629]: I1216 12:15:40.968362 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4517fd6f-78fe-47ae-9d6d-f6ee6d3ebba7-tigera-ca-bundle\") pod \"calico-kube-controllers-6574577cd4-cw4js\" (UID: \"4517fd6f-78fe-47ae-9d6d-f6ee6d3ebba7\") " pod="calico-system/calico-kube-controllers-6574577cd4-cw4js" Dec 16 12:15:40.968674 kubelet[3629]: I1216 12:15:40.968394 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm5s5\" (UniqueName: \"kubernetes.io/projected/4517fd6f-78fe-47ae-9d6d-f6ee6d3ebba7-kube-api-access-nm5s5\") pod \"calico-kube-controllers-6574577cd4-cw4js\" (UID: \"4517fd6f-78fe-47ae-9d6d-f6ee6d3ebba7\") " pod="calico-system/calico-kube-controllers-6574577cd4-cw4js" Dec 16 12:15:40.968674 kubelet[3629]: I1216 12:15:40.968407 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/306c4b20-3806-46c9-b86b-305f578267e9-config-volume\") pod \"coredns-66bc5c9577-8x7w7\" (UID: \"306c4b20-3806-46c9-b86b-305f578267e9\") " pod="kube-system/coredns-66bc5c9577-8x7w7" Dec 16 12:15:40.968674 kubelet[3629]: I1216 12:15:40.968416 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdxk8\" (UniqueName: \"kubernetes.io/projected/306c4b20-3806-46c9-b86b-305f578267e9-kube-api-access-gdxk8\") pod \"coredns-66bc5c9577-8x7w7\" (UID: \"306c4b20-3806-46c9-b86b-305f578267e9\") " pod="kube-system/coredns-66bc5c9577-8x7w7" Dec 16 12:15:40.971319 systemd[1]: Created slice kubepods-burstable-pod1578eaf6_7b31_404e_823c_f6b50cad689e.slice - libcontainer container kubepods-burstable-pod1578eaf6_7b31_404e_823c_f6b50cad689e.slice. Dec 16 12:15:40.980813 systemd[1]: Created slice kubepods-besteffort-pod8ad17ca2_8ff1_4ee9_bc62_9c8a663b9e85.slice - libcontainer container kubepods-besteffort-pod8ad17ca2_8ff1_4ee9_bc62_9c8a663b9e85.slice. Dec 16 12:15:40.987010 systemd[1]: Created slice kubepods-besteffort-poda081eac5_c790_4263_a08c_1af1e10fce20.slice - libcontainer container kubepods-besteffort-poda081eac5_c790_4263_a08c_1af1e10fce20.slice. 
Dec 16 12:15:40.991915 containerd[2077]: time="2025-12-16T12:15:40.991741246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lft87,Uid:8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85,Namespace:calico-system,Attempt:0,}" Dec 16 12:15:40.994300 systemd[1]: Created slice kubepods-besteffort-pod934998e2_1bd4_4baf_a9e2_cc5a0c414cea.slice - libcontainer container kubepods-besteffort-pod934998e2_1bd4_4baf_a9e2_cc5a0c414cea.slice. Dec 16 12:15:40.999367 systemd[1]: Created slice kubepods-besteffort-podc207d796_3dc3_47ed_bf51_f866d808dda0.slice - libcontainer container kubepods-besteffort-podc207d796_3dc3_47ed_bf51_f866d808dda0.slice. Dec 16 12:15:41.007004 systemd[1]: Created slice kubepods-besteffort-pod42e08362_84ba_4be1_b9a5_3a3391796c9d.slice - libcontainer container kubepods-besteffort-pod42e08362_84ba_4be1_b9a5_3a3391796c9d.slice. Dec 16 12:15:41.055032 containerd[2077]: time="2025-12-16T12:15:41.054968569Z" level=error msg="Failed to destroy network for sandbox \"5fad913dffef65c09171ad7ffa28629cc68a0f5e4c964ae8a0efd18c4191a87c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:41.056703 systemd[1]: run-netns-cni\x2d0360081f\x2d88e3\x2d4d2a\x2d542e\x2d70f171c89971.mount: Deactivated successfully. Dec 16 12:15:41.063494 containerd[2077]: time="2025-12-16T12:15:41.063404079Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lft87,Uid:8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fad913dffef65c09171ad7ffa28629cc68a0f5e4c964ae8a0efd18c4191a87c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:41.063972 kubelet[3629]: E1216 12:15:41.063850 3629 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fad913dffef65c09171ad7ffa28629cc68a0f5e4c964ae8a0efd18c4191a87c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:41.063972 kubelet[3629]: E1216 12:15:41.063932 3629 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fad913dffef65c09171ad7ffa28629cc68a0f5e4c964ae8a0efd18c4191a87c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lft87" Dec 16 12:15:41.063972 kubelet[3629]: E1216 12:15:41.063947 3629 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fad913dffef65c09171ad7ffa28629cc68a0f5e4c964ae8a0efd18c4191a87c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lft87" Dec 16 12:15:41.064223 kubelet[3629]: E1216 12:15:41.064161 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-lft87_calico-system(8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lft87_calico-system(8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5fad913dffef65c09171ad7ffa28629cc68a0f5e4c964ae8a0efd18c4191a87c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lft87" podUID="8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85" Dec 16 12:15:41.069040 kubelet[3629]: I1216 12:15:41.069013 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1578eaf6-7b31-404e-823c-f6b50cad689e-config-volume\") pod \"coredns-66bc5c9577-d7m9p\" (UID: \"1578eaf6-7b31-404e-823c-f6b50cad689e\") " pod="kube-system/coredns-66bc5c9577-d7m9p" Dec 16 12:15:41.069215 kubelet[3629]: I1216 12:15:41.069203 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42e08362-84ba-4be1-b9a5-3a3391796c9d-config\") pod \"goldmane-7c778bb748-xnsjr\" (UID: \"42e08362-84ba-4be1-b9a5-3a3391796c9d\") " pod="calico-system/goldmane-7c778bb748-xnsjr" Dec 16 12:15:41.070066 kubelet[3629]: I1216 12:15:41.069995 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/42e08362-84ba-4be1-b9a5-3a3391796c9d-goldmane-key-pair\") pod \"goldmane-7c778bb748-xnsjr\" (UID: \"42e08362-84ba-4be1-b9a5-3a3391796c9d\") " pod="calico-system/goldmane-7c778bb748-xnsjr" Dec 16 12:15:41.070324 kubelet[3629]: I1216 12:15:41.070209 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a081eac5-c790-4263-a08c-1af1e10fce20-calico-apiserver-certs\") pod \"calico-apiserver-65f8f4c6db-glr2q\" (UID: \"a081eac5-c790-4263-a08c-1af1e10fce20\") " pod="calico-apiserver/calico-apiserver-65f8f4c6db-glr2q" Dec 16 12:15:41.070324 kubelet[3629]: I1216 12:15:41.070242 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42e08362-84ba-4be1-b9a5-3a3391796c9d-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-xnsjr\" (UID: \"42e08362-84ba-4be1-b9a5-3a3391796c9d\") " pod="calico-system/goldmane-7c778bb748-xnsjr" Dec 16 12:15:41.071678 kubelet[3629]: I1216 12:15:41.071655 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d9cq\" (UniqueName: \"kubernetes.io/projected/c207d796-3dc3-47ed-bf51-f866d808dda0-kube-api-access-5d9cq\") pod \"whisker-56b7cfb974-fm6jm\" (UID: \"c207d796-3dc3-47ed-bf51-f866d808dda0\") " pod="calico-system/whisker-56b7cfb974-fm6jm" Dec 16 12:15:41.072667 kubelet[3629]: I1216 12:15:41.072620 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/934998e2-1bd4-4baf-a9e2-cc5a0c414cea-calico-apiserver-certs\") pod \"calico-apiserver-65f8f4c6db-nsvdz\" (UID: \"934998e2-1bd4-4baf-a9e2-cc5a0c414cea\") " pod="calico-apiserver/calico-apiserver-65f8f4c6db-nsvdz" Dec 16 12:15:41.072667 
kubelet[3629]: I1216 12:15:41.072644 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wl9p\" (UniqueName: \"kubernetes.io/projected/934998e2-1bd4-4baf-a9e2-cc5a0c414cea-kube-api-access-2wl9p\") pod \"calico-apiserver-65f8f4c6db-nsvdz\" (UID: \"934998e2-1bd4-4baf-a9e2-cc5a0c414cea\") " pod="calico-apiserver/calico-apiserver-65f8f4c6db-nsvdz" Dec 16 12:15:41.072667 kubelet[3629]: I1216 12:15:41.072727 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdx4c\" (UniqueName: \"kubernetes.io/projected/42e08362-84ba-4be1-b9a5-3a3391796c9d-kube-api-access-tdx4c\") pod \"goldmane-7c778bb748-xnsjr\" (UID: \"42e08362-84ba-4be1-b9a5-3a3391796c9d\") " pod="calico-system/goldmane-7c778bb748-xnsjr" Dec 16 12:15:41.072667 kubelet[3629]: I1216 12:15:41.072741 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c207d796-3dc3-47ed-bf51-f866d808dda0-whisker-backend-key-pair\") pod \"whisker-56b7cfb974-fm6jm\" (UID: \"c207d796-3dc3-47ed-bf51-f866d808dda0\") " pod="calico-system/whisker-56b7cfb974-fm6jm" Dec 16 12:15:41.072667 kubelet[3629]: I1216 12:15:41.072773 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwbbw\" (UniqueName: \"kubernetes.io/projected/a081eac5-c790-4263-a08c-1af1e10fce20-kube-api-access-jwbbw\") pod \"calico-apiserver-65f8f4c6db-glr2q\" (UID: \"a081eac5-c790-4263-a08c-1af1e10fce20\") " pod="calico-apiserver/calico-apiserver-65f8f4c6db-glr2q" Dec 16 12:15:41.072937 kubelet[3629]: I1216 12:15:41.072831 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnsh4\" (UniqueName: \"kubernetes.io/projected/1578eaf6-7b31-404e-823c-f6b50cad689e-kube-api-access-lnsh4\") pod \"coredns-66bc5c9577-d7m9p\" (UID: \"1578eaf6-7b31-404e-823c-f6b50cad689e\") " pod="kube-system/coredns-66bc5c9577-d7m9p" Dec 16 12:15:41.073070 kubelet[3629]: I1216 12:15:41.073045 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c207d796-3dc3-47ed-bf51-f866d808dda0-whisker-ca-bundle\") pod \"whisker-56b7cfb974-fm6jm\" (UID: \"c207d796-3dc3-47ed-bf51-f866d808dda0\") " pod="calico-system/whisker-56b7cfb974-fm6jm" Dec 16 12:15:41.263533 containerd[2077]: time="2025-12-16T12:15:41.262858898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8x7w7,Uid:306c4b20-3806-46c9-b86b-305f578267e9,Namespace:kube-system,Attempt:0,}" Dec 16 12:15:41.272571 containerd[2077]: time="2025-12-16T12:15:41.272409784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6574577cd4-cw4js,Uid:4517fd6f-78fe-47ae-9d6d-f6ee6d3ebba7,Namespace:calico-system,Attempt:0,}" Dec 16 12:15:41.288687 containerd[2077]: time="2025-12-16T12:15:41.288650662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-d7m9p,Uid:1578eaf6-7b31-404e-823c-f6b50cad689e,Namespace:kube-system,Attempt:0,}" Dec 16 12:15:41.294829 containerd[2077]: time="2025-12-16T12:15:41.294801031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65f8f4c6db-glr2q,Uid:a081eac5-c790-4263-a08c-1af1e10fce20,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:15:41.302329 containerd[2077]: 
time="2025-12-16T12:15:41.302289036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65f8f4c6db-nsvdz,Uid:934998e2-1bd4-4baf-a9e2-cc5a0c414cea,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:15:41.311461 containerd[2077]: time="2025-12-16T12:15:41.311425678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56b7cfb974-fm6jm,Uid:c207d796-3dc3-47ed-bf51-f866d808dda0,Namespace:calico-system,Attempt:0,}" Dec 16 12:15:41.320661 containerd[2077]: time="2025-12-16T12:15:41.320622998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-xnsjr,Uid:42e08362-84ba-4be1-b9a5-3a3391796c9d,Namespace:calico-system,Attempt:0,}" Dec 16 12:15:41.330292 containerd[2077]: time="2025-12-16T12:15:41.330254075Z" level=error msg="Failed to destroy network for sandbox \"7127756f3e4eb2d5eaf924ec824bea868b0583271530120e81319f3cc670a4eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:41.342446 containerd[2077]: time="2025-12-16T12:15:41.342387760Z" level=error msg="Failed to destroy network for sandbox \"7b2fc2a2bb107db269ae8a34d6658c7edfd321c41b8f910184a38c5ba3d3ab46\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:41.371182 containerd[2077]: time="2025-12-16T12:15:41.370527118Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8x7w7,Uid:306c4b20-3806-46c9-b86b-305f578267e9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7127756f3e4eb2d5eaf924ec824bea868b0583271530120e81319f3cc670a4eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:41.372027 kubelet[3629]: E1216 12:15:41.370751 3629 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7127756f3e4eb2d5eaf924ec824bea868b0583271530120e81319f3cc670a4eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:41.372027 kubelet[3629]: E1216 12:15:41.370817 3629 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7127756f3e4eb2d5eaf924ec824bea868b0583271530120e81319f3cc670a4eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-8x7w7" Dec 16 12:15:41.372027 kubelet[3629]: E1216 12:15:41.370833 3629 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7127756f3e4eb2d5eaf924ec824bea868b0583271530120e81319f3cc670a4eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-8x7w7" Dec 16 12:15:41.372119 kubelet[3629]: E1216 12:15:41.370882 3629 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-8x7w7_kube-system(306c4b20-3806-46c9-b86b-305f578267e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-8x7w7_kube-system(306c4b20-3806-46c9-b86b-305f578267e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7127756f3e4eb2d5eaf924ec824bea868b0583271530120e81319f3cc670a4eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-8x7w7" podUID="306c4b20-3806-46c9-b86b-305f578267e9" Dec 16 12:15:41.383730 containerd[2077]: time="2025-12-16T12:15:41.382438648Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6574577cd4-cw4js,Uid:4517fd6f-78fe-47ae-9d6d-f6ee6d3ebba7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b2fc2a2bb107db269ae8a34d6658c7edfd321c41b8f910184a38c5ba3d3ab46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:41.383884 kubelet[3629]: E1216 12:15:41.382648 3629 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b2fc2a2bb107db269ae8a34d6658c7edfd321c41b8f910184a38c5ba3d3ab46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:41.383884 kubelet[3629]: E1216 12:15:41.382705 3629 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b2fc2a2bb107db269ae8a34d6658c7edfd321c41b8f910184a38c5ba3d3ab46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6574577cd4-cw4js" Dec 16 12:15:41.383884 kubelet[3629]: E1216 12:15:41.382722 3629 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b2fc2a2bb107db269ae8a34d6658c7edfd321c41b8f910184a38c5ba3d3ab46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6574577cd4-cw4js" Dec 16 12:15:41.383977 kubelet[3629]: E1216 12:15:41.382785 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6574577cd4-cw4js_calico-system(4517fd6f-78fe-47ae-9d6d-f6ee6d3ebba7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6574577cd4-cw4js_calico-system(4517fd6f-78fe-47ae-9d6d-f6ee6d3ebba7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b2fc2a2bb107db269ae8a34d6658c7edfd321c41b8f910184a38c5ba3d3ab46\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-6574577cd4-cw4js" podUID="4517fd6f-78fe-47ae-9d6d-f6ee6d3ebba7" Dec 16 12:15:41.398975 containerd[2077]: time="2025-12-16T12:15:41.398925867Z" level=error msg="Failed to destroy network for sandbox \"73391159f6e384b8bf63990a3fe94665b32a19f906d0295016e8bc22b963a4b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:41.407023 containerd[2077]: time="2025-12-16T12:15:41.406865959Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-d7m9p,Uid:1578eaf6-7b31-404e-823c-f6b50cad689e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"73391159f6e384b8bf63990a3fe94665b32a19f906d0295016e8bc22b963a4b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:41.408262 kubelet[3629]: E1216 12:15:41.407420 3629 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73391159f6e384b8bf63990a3fe94665b32a19f906d0295016e8bc22b963a4b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:41.408262 kubelet[3629]: E1216 12:15:41.407476 3629 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73391159f6e384b8bf63990a3fe94665b32a19f906d0295016e8bc22b963a4b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-d7m9p" Dec 16 12:15:41.408262 kubelet[3629]: E1216 12:15:41.407511 3629 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73391159f6e384b8bf63990a3fe94665b32a19f906d0295016e8bc22b963a4b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-d7m9p" Dec 16 12:15:41.408409 kubelet[3629]: E1216 12:15:41.407552 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-d7m9p_kube-system(1578eaf6-7b31-404e-823c-f6b50cad689e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-d7m9p_kube-system(1578eaf6-7b31-404e-823c-f6b50cad689e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73391159f6e384b8bf63990a3fe94665b32a19f906d0295016e8bc22b963a4b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-d7m9p" podUID="1578eaf6-7b31-404e-823c-f6b50cad689e" Dec 16 12:15:41.427967 containerd[2077]: time="2025-12-16T12:15:41.427922675Z" level=error msg="Failed to destroy network for sandbox \"dd1163f8ecc9617f85688e7cc8728cd78935546ff5b31a142027250515235372\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:41.432547 containerd[2077]: time="2025-12-16T12:15:41.432461770Z" level=error msg="Failed to destroy network for sandbox \"7864234951abb6fda7c7972cf1c8f16becc41f45fa8966f6febbc7b6cbb8e295\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:41.435651 containerd[2077]: time="2025-12-16T12:15:41.435607849Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65f8f4c6db-glr2q,Uid:a081eac5-c790-4263-a08c-1af1e10fce20,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd1163f8ecc9617f85688e7cc8728cd78935546ff5b31a142027250515235372\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:41.436782 kubelet[3629]: E1216 12:15:41.436351 3629 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd1163f8ecc9617f85688e7cc8728cd78935546ff5b31a142027250515235372\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:41.436782 kubelet[3629]: E1216 12:15:41.436480 3629 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd1163f8ecc9617f85688e7cc8728cd78935546ff5b31a142027250515235372\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65f8f4c6db-glr2q" Dec 16 12:15:41.436782 kubelet[3629]: E1216 12:15:41.436498 3629 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd1163f8ecc9617f85688e7cc8728cd78935546ff5b31a142027250515235372\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65f8f4c6db-glr2q" Dec 16 12:15:41.436907 kubelet[3629]: E1216 12:15:41.436873 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65f8f4c6db-glr2q_calico-apiserver(a081eac5-c790-4263-a08c-1af1e10fce20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65f8f4c6db-glr2q_calico-apiserver(a081eac5-c790-4263-a08c-1af1e10fce20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd1163f8ecc9617f85688e7cc8728cd78935546ff5b31a142027250515235372\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65f8f4c6db-glr2q" podUID="a081eac5-c790-4263-a08c-1af1e10fce20" Dec 16 12:15:41.442029 containerd[2077]: time="2025-12-16T12:15:41.441991726Z" level=error msg="Failed to destroy network for sandbox 
\"b8dbaf74d27de0ec6bef2a1eeb21d86bf821ff2d27c5d5b9bed03c933db9f621\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:41.443873 containerd[2077]: time="2025-12-16T12:15:41.443771584Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65f8f4c6db-nsvdz,Uid:934998e2-1bd4-4baf-a9e2-cc5a0c414cea,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7864234951abb6fda7c7972cf1c8f16becc41f45fa8966f6febbc7b6cbb8e295\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:41.443992 kubelet[3629]: E1216 12:15:41.443958 3629 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7864234951abb6fda7c7972cf1c8f16becc41f45fa8966f6febbc7b6cbb8e295\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:41.444040 kubelet[3629]: E1216 12:15:41.443997 3629 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7864234951abb6fda7c7972cf1c8f16becc41f45fa8966f6febbc7b6cbb8e295\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65f8f4c6db-nsvdz" Dec 16 12:15:41.444040 kubelet[3629]: E1216 12:15:41.444011 3629 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7864234951abb6fda7c7972cf1c8f16becc41f45fa8966f6febbc7b6cbb8e295\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65f8f4c6db-nsvdz" Dec 16 12:15:41.444089 kubelet[3629]: E1216 12:15:41.444054 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65f8f4c6db-nsvdz_calico-apiserver(934998e2-1bd4-4baf-a9e2-cc5a0c414cea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65f8f4c6db-nsvdz_calico-apiserver(934998e2-1bd4-4baf-a9e2-cc5a0c414cea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7864234951abb6fda7c7972cf1c8f16becc41f45fa8966f6febbc7b6cbb8e295\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65f8f4c6db-nsvdz" podUID="934998e2-1bd4-4baf-a9e2-cc5a0c414cea" Dec 16 12:15:41.450742 containerd[2077]: time="2025-12-16T12:15:41.450674138Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56b7cfb974-fm6jm,Uid:c207d796-3dc3-47ed-bf51-f866d808dda0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8dbaf74d27de0ec6bef2a1eeb21d86bf821ff2d27c5d5b9bed03c933db9f621\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:41.452373 kubelet[3629]: E1216 12:15:41.452069 3629 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8dbaf74d27de0ec6bef2a1eeb21d86bf821ff2d27c5d5b9bed03c933db9f621\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:41.452373 kubelet[3629]: E1216 12:15:41.452107 3629 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8dbaf74d27de0ec6bef2a1eeb21d86bf821ff2d27c5d5b9bed03c933db9f621\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56b7cfb974-fm6jm" Dec 16 12:15:41.452373 kubelet[3629]: E1216 12:15:41.452119 3629 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8dbaf74d27de0ec6bef2a1eeb21d86bf821ff2d27c5d5b9bed03c933db9f621\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56b7cfb974-fm6jm" Dec 16 12:15:41.452496 kubelet[3629]: E1216 12:15:41.452157 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-56b7cfb974-fm6jm_calico-system(c207d796-3dc3-47ed-bf51-f866d808dda0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-56b7cfb974-fm6jm_calico-system(c207d796-3dc3-47ed-bf51-f866d808dda0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8dbaf74d27de0ec6bef2a1eeb21d86bf821ff2d27c5d5b9bed03c933db9f621\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-56b7cfb974-fm6jm" podUID="c207d796-3dc3-47ed-bf51-f866d808dda0" Dec 16 12:15:41.454637 containerd[2077]: time="2025-12-16T12:15:41.454540719Z" level=error msg="Failed to destroy network for sandbox \"60f4f1c2af6235c13c89ff7e0ec182c65714997bef6dc00ede77c57fcf368389\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:41.462515 containerd[2077]: time="2025-12-16T12:15:41.462432582Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-xnsjr,Uid:42e08362-84ba-4be1-b9a5-3a3391796c9d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"60f4f1c2af6235c13c89ff7e0ec182c65714997bef6dc00ede77c57fcf368389\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:41.462871 kubelet[3629]: E1216 12:15:41.462638 3629 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"60f4f1c2af6235c13c89ff7e0ec182c65714997bef6dc00ede77c57fcf368389\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:41.462871 kubelet[3629]: E1216 12:15:41.462688 3629 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60f4f1c2af6235c13c89ff7e0ec182c65714997bef6dc00ede77c57fcf368389\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-xnsjr" Dec 16 12:15:41.462871 kubelet[3629]: E1216 12:15:41.462702 3629 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60f4f1c2af6235c13c89ff7e0ec182c65714997bef6dc00ede77c57fcf368389\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-xnsjr" Dec 16 12:15:41.463045 kubelet[3629]: E1216 12:15:41.462745 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-xnsjr_calico-system(42e08362-84ba-4be1-b9a5-3a3391796c9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-xnsjr_calico-system(42e08362-84ba-4be1-b9a5-3a3391796c9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60f4f1c2af6235c13c89ff7e0ec182c65714997bef6dc00ede77c57fcf368389\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-xnsjr" podUID="42e08362-84ba-4be1-b9a5-3a3391796c9d" Dec 16 12:15:41.596013 containerd[2077]: time="2025-12-16T12:15:41.594875504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 12:15:45.320498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount934008818.mount: Deactivated successfully. 
Dec 16 12:15:46.103681 containerd[2077]: time="2025-12-16T12:15:46.103619331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:46.106391 containerd[2077]: time="2025-12-16T12:15:46.106272367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Dec 16 12:15:46.110406 containerd[2077]: time="2025-12-16T12:15:46.109784134Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:46.113579 containerd[2077]: time="2025-12-16T12:15:46.113532296Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:46.114015 containerd[2077]: time="2025-12-16T12:15:46.113984263Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.519067628s" Dec 16 12:15:46.114119 containerd[2077]: time="2025-12-16T12:15:46.114103729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 16 12:15:46.130185 containerd[2077]: time="2025-12-16T12:15:46.130147119Z" level=info msg="CreateContainer within sandbox \"32b10c45a1587f1bb2c44a12858dd7682d9b57a0bc1568824e671f821fd31812\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 12:15:46.154106 containerd[2077]: time="2025-12-16T12:15:46.153236586Z" level=info msg="Container c2248755a9fd4f2beb5fcd6411ea08c7f765d32d583a90c8522e5741c6eab195: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:15:46.173105 containerd[2077]: time="2025-12-16T12:15:46.173060853Z" level=info msg="CreateContainer within sandbox \"32b10c45a1587f1bb2c44a12858dd7682d9b57a0bc1568824e671f821fd31812\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c2248755a9fd4f2beb5fcd6411ea08c7f765d32d583a90c8522e5741c6eab195\"" Dec 16 12:15:46.174795 containerd[2077]: time="2025-12-16T12:15:46.173771178Z" level=info msg="StartContainer for \"c2248755a9fd4f2beb5fcd6411ea08c7f765d32d583a90c8522e5741c6eab195\"" Dec 16 12:15:46.176141 containerd[2077]: time="2025-12-16T12:15:46.176114596Z" level=info msg="connecting to shim c2248755a9fd4f2beb5fcd6411ea08c7f765d32d583a90c8522e5741c6eab195" address="unix:///run/containerd/s/a6987e319db5cb52b4a46f03281cde03578d01c5c997b939501eaca305e63c57" protocol=ttrpc version=3 Dec 16 12:15:46.192941 systemd[1]: Started cri-containerd-c2248755a9fd4f2beb5fcd6411ea08c7f765d32d583a90c8522e5741c6eab195.scope - libcontainer container c2248755a9fd4f2beb5fcd6411ea08c7f765d32d583a90c8522e5741c6eab195. 
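The pull entries above report 150930912 bytes read for ghcr.io/flatcar/calico/node:v3.30.4 and a total pull time of 4.519067628s. A small Go sketch of the average throughput implied by those two numbers follows; it is plain arithmetic over values copied from the log, nothing here comes from containerd itself.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Both values copied from the "stop pulling image" / "Pulled image" entries above.
        const bytesRead = 150930912
        elapsed, err := time.ParseDuration("4.519067628s")
        if err != nil {
            panic(err)
        }
        mib := float64(bytesRead) / (1 << 20)
        fmt.Printf("%.1f MiB in %s ~ %.1f MiB/s\n", mib, elapsed, mib/elapsed.Seconds())
    }

That works out to roughly 32 MiB/s for this pull.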
Dec 16 12:15:46.233000 audit: BPF prog-id=196 op=LOAD Dec 16 12:15:46.238234 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 16 12:15:46.238334 kernel: audit: type=1334 audit(1765887346.233:598): prog-id=196 op=LOAD Dec 16 12:15:46.233000 audit[4570]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4093 pid=4570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:46.260780 kernel: audit: type=1300 audit(1765887346.233:598): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4093 pid=4570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:46.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332323438373535613966643466326265623566636436343131656130 Dec 16 12:15:46.241000 audit: BPF prog-id=197 op=LOAD Dec 16 12:15:46.283072 kernel: audit: type=1327 audit(1765887346.233:598): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332323438373535613966643466326265623566636436343131656130 Dec 16 12:15:46.283144 kernel: audit: type=1334 audit(1765887346.241:599): prog-id=197 op=LOAD Dec 16 12:15:46.241000 audit[4570]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4093 pid=4570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:46.300556 kernel: audit: type=1300 audit(1765887346.241:599): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4093 pid=4570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:46.241000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332323438373535613966643466326265623566636436343131656130 Dec 16 12:15:46.318886 kernel: audit: type=1327 audit(1765887346.241:599): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332323438373535613966643466326265623566636436343131656130 Dec 16 12:15:46.241000 audit: BPF prog-id=197 op=UNLOAD Dec 16 12:15:46.324058 kernel: audit: type=1334 audit(1765887346.241:600): prog-id=197 op=UNLOAD Dec 16 12:15:46.241000 audit[4570]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4093 pid=4570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:46.340678 kernel: audit: type=1300 
audit(1765887346.241:600): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4093 pid=4570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:46.241000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332323438373535613966643466326265623566636436343131656130 Dec 16 12:15:46.342695 kubelet[3629]: I1216 12:15:46.341952 3629 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:15:46.358235 kernel: audit: type=1327 audit(1765887346.241:600): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332323438373535613966643466326265623566636436343131656130 Dec 16 12:15:46.241000 audit: BPF prog-id=196 op=UNLOAD Dec 16 12:15:46.363775 kernel: audit: type=1334 audit(1765887346.241:601): prog-id=196 op=UNLOAD Dec 16 12:15:46.241000 audit[4570]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4093 pid=4570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:46.241000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332323438373535613966643466326265623566636436343131656130 Dec 16 12:15:46.241000 audit: BPF prog-id=198 op=LOAD Dec 16 12:15:46.241000 audit[4570]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4093 pid=4570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:46.241000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332323438373535613966643466326265623566636436343131656130 Dec 16 12:15:46.378346 containerd[2077]: time="2025-12-16T12:15:46.378225682Z" level=info msg="StartContainer for \"c2248755a9fd4f2beb5fcd6411ea08c7f765d32d583a90c8522e5741c6eab195\" returns successfully" Dec 16 12:15:46.416000 audit[4606]: NETFILTER_CFG table=filter:118 family=2 entries=21 op=nft_register_rule pid=4606 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:46.416000 audit[4606]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffffa30ad60 a2=0 a3=1 items=0 ppid=3775 pid=4606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:46.416000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:46.419000 audit[4606]: NETFILTER_CFG table=nat:119 family=2 entries=19 op=nft_register_chain pid=4606 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Dec 16 12:15:46.419000 audit[4606]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=fffffa30ad60 a2=0 a3=1 items=0 ppid=3775 pid=4606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:46.419000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:46.596794 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 12:15:46.596935 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 16 12:15:46.634146 kubelet[3629]: I1216 12:15:46.633994 3629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jhbwd" podStartSLOduration=1.6686820679999999 podStartE2EDuration="15.633977235s" podCreationTimestamp="2025-12-16 12:15:31 +0000 UTC" firstStartedPulling="2025-12-16 12:15:32.149450202 +0000 UTC m=+20.750442904" lastFinishedPulling="2025-12-16 12:15:46.114745369 +0000 UTC m=+34.715738071" observedRunningTime="2025-12-16 12:15:46.62716584 +0000 UTC m=+35.228158550" watchObservedRunningTime="2025-12-16 12:15:46.633977235 +0000 UTC m=+35.234969937" Dec 16 12:15:46.810197 kubelet[3629]: I1216 12:15:46.810118 3629 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c207d796-3dc3-47ed-bf51-f866d808dda0-whisker-backend-key-pair\") pod \"c207d796-3dc3-47ed-bf51-f866d808dda0\" (UID: \"c207d796-3dc3-47ed-bf51-f866d808dda0\") " Dec 16 12:15:46.810717 kubelet[3629]: I1216 12:15:46.810569 3629 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d9cq\" (UniqueName: \"kubernetes.io/projected/c207d796-3dc3-47ed-bf51-f866d808dda0-kube-api-access-5d9cq\") pod \"c207d796-3dc3-47ed-bf51-f866d808dda0\" (UID: \"c207d796-3dc3-47ed-bf51-f866d808dda0\") " Dec 16 12:15:46.810949 kubelet[3629]: I1216 12:15:46.810901 3629 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c207d796-3dc3-47ed-bf51-f866d808dda0-whisker-ca-bundle\") pod \"c207d796-3dc3-47ed-bf51-f866d808dda0\" (UID: \"c207d796-3dc3-47ed-bf51-f866d808dda0\") " Dec 16 12:15:46.811915 kubelet[3629]: I1216 12:15:46.811893 3629 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c207d796-3dc3-47ed-bf51-f866d808dda0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c207d796-3dc3-47ed-bf51-f866d808dda0" (UID: "c207d796-3dc3-47ed-bf51-f866d808dda0"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:15:46.814684 systemd[1]: var-lib-kubelet-pods-c207d796\x2d3dc3\x2d47ed\x2dbf51\x2df866d808dda0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5d9cq.mount: Deactivated successfully. Dec 16 12:15:46.814790 systemd[1]: var-lib-kubelet-pods-c207d796\x2d3dc3\x2d47ed\x2dbf51\x2df866d808dda0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Dec 16 12:15:46.816808 kubelet[3629]: I1216 12:15:46.816485 3629 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c207d796-3dc3-47ed-bf51-f866d808dda0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c207d796-3dc3-47ed-bf51-f866d808dda0" (UID: "c207d796-3dc3-47ed-bf51-f866d808dda0"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 12:15:46.817912 kubelet[3629]: I1216 12:15:46.817887 3629 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c207d796-3dc3-47ed-bf51-f866d808dda0-kube-api-access-5d9cq" (OuterVolumeSpecName: "kube-api-access-5d9cq") pod "c207d796-3dc3-47ed-bf51-f866d808dda0" (UID: "c207d796-3dc3-47ed-bf51-f866d808dda0"). InnerVolumeSpecName "kube-api-access-5d9cq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:15:46.912003 kubelet[3629]: I1216 12:15:46.911852 3629 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c207d796-3dc3-47ed-bf51-f866d808dda0-whisker-backend-key-pair\") on node \"ci-4547.0.0-a-623de6ebc0\" DevicePath \"\"" Dec 16 12:15:46.912003 kubelet[3629]: I1216 12:15:46.911892 3629 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5d9cq\" (UniqueName: \"kubernetes.io/projected/c207d796-3dc3-47ed-bf51-f866d808dda0-kube-api-access-5d9cq\") on node \"ci-4547.0.0-a-623de6ebc0\" DevicePath \"\"" Dec 16 12:15:46.912003 kubelet[3629]: I1216 12:15:46.911899 3629 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c207d796-3dc3-47ed-bf51-f866d808dda0-whisker-ca-bundle\") on node \"ci-4547.0.0-a-623de6ebc0\" DevicePath \"\"" Dec 16 12:15:47.484590 systemd[1]: Removed slice kubepods-besteffort-podc207d796_3dc3_47ed_bf51_f866d808dda0.slice - libcontainer container kubepods-besteffort-podc207d796_3dc3_47ed_bf51_f866d808dda0.slice. Dec 16 12:15:47.689352 systemd[1]: Created slice kubepods-besteffort-poda62f9a6a_f8d3_4852_be55_6d4bed6c90c8.slice - libcontainer container kubepods-besteffort-poda62f9a6a_f8d3_4852_be55_6d4bed6c90c8.slice. 
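The mount units torn down above (var-lib-kubelet-pods-c207d796\x2d3dc3\x2d…-kube\x2dapi\x2daccess\x2d5d9cq.mount) encode kubelet volume paths with systemd's unit-name escaping, where "/" becomes "-" and other reserved bytes become \xNN; the kubepods slice names, by contrast, simply replace "-" with "_" in the pod UID, as the Removed/Created slice entries show. Below is a minimal Go sketch of reversing the mount-unit escaping, as an illustration of the convention rather than systemd's own code (systemd-escape --unescape --path does the equivalent from a shell).

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // unescapeUnitPath reverses the escaping used in .mount unit names,
    // where "-" stands for "/" and "\xNN" encodes a raw byte.
    func unescapeUnitPath(name string) string {
        name = strings.TrimSuffix(name, ".mount")
        var b strings.Builder
        for i := 0; i < len(name); i++ {
            switch {
            case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
                if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
                    b.WriteByte(byte(v))
                    i += 3
                    continue
                }
                b.WriteByte(name[i])
            case name[i] == '-':
                b.WriteByte('/')
            default:
                b.WriteByte(name[i])
            }
        }
        return "/" + b.String()
    }

    func main() {
        // Unit name copied from the "Deactivated successfully" entries above.
        fmt.Println(unescapeUnitPath(`var-lib-kubelet-pods-c207d796\x2d3dc3\x2d47ed\x2dbf51\x2df866d808dda0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5d9cq.mount`))
    }

Run on the unit name from the log, this yields /var/lib/kubelet/pods/c207d796-3dc3-47ed-bf51-f866d808dda0/volumes/kubernetes.io~projected/kube-api-access-5d9cq, matching the kube-api-access-5d9cq volume the reconciler unmounted above.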
Dec 16 12:15:47.716034 kubelet[3629]: I1216 12:15:47.715930 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a62f9a6a-f8d3-4852-be55-6d4bed6c90c8-whisker-backend-key-pair\") pod \"whisker-568467779b-jqkm2\" (UID: \"a62f9a6a-f8d3-4852-be55-6d4bed6c90c8\") " pod="calico-system/whisker-568467779b-jqkm2" Dec 16 12:15:47.717009 kubelet[3629]: I1216 12:15:47.716335 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a62f9a6a-f8d3-4852-be55-6d4bed6c90c8-whisker-ca-bundle\") pod \"whisker-568467779b-jqkm2\" (UID: \"a62f9a6a-f8d3-4852-be55-6d4bed6c90c8\") " pod="calico-system/whisker-568467779b-jqkm2" Dec 16 12:15:47.717009 kubelet[3629]: I1216 12:15:47.716387 3629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxc47\" (UniqueName: \"kubernetes.io/projected/a62f9a6a-f8d3-4852-be55-6d4bed6c90c8-kube-api-access-vxc47\") pod \"whisker-568467779b-jqkm2\" (UID: \"a62f9a6a-f8d3-4852-be55-6d4bed6c90c8\") " pod="calico-system/whisker-568467779b-jqkm2" Dec 16 12:15:48.001718 containerd[2077]: time="2025-12-16T12:15:48.001564604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-568467779b-jqkm2,Uid:a62f9a6a-f8d3-4852-be55-6d4bed6c90c8,Namespace:calico-system,Attempt:0,}" Dec 16 12:15:48.197606 systemd-networkd[1659]: cali93a33620861: Link UP Dec 16 12:15:48.198686 systemd-networkd[1659]: cali93a33620861: Gained carrier Dec 16 12:15:48.207000 audit: BPF prog-id=199 op=LOAD Dec 16 12:15:48.207000 audit[4778]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff7253a28 a2=98 a3=fffff7253a18 items=0 ppid=4647 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.207000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:15:48.209000 audit: BPF prog-id=199 op=UNLOAD Dec 16 12:15:48.209000 audit[4778]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff72539f8 a3=0 items=0 ppid=4647 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.209000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:15:48.209000 audit: BPF prog-id=200 op=LOAD Dec 16 12:15:48.209000 audit[4778]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff72538d8 a2=74 a3=95 items=0 ppid=4647 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.209000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:15:48.209000 audit: BPF prog-id=200 op=UNLOAD Dec 16 12:15:48.209000 audit[4778]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4647 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.209000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:15:48.209000 audit: BPF prog-id=201 op=LOAD Dec 16 12:15:48.209000 audit[4778]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff7253908 a2=40 a3=fffff7253938 items=0 ppid=4647 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.209000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:15:48.209000 audit: BPF prog-id=201 op=UNLOAD Dec 16 12:15:48.209000 audit[4778]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffff7253938 items=0 ppid=4647 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.209000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:15:48.216724 containerd[2077]: 2025-12-16 12:15:48.047 [INFO][4727] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:15:48.216724 containerd[2077]: 2025-12-16 12:15:48.104 [INFO][4727] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--623de6ebc0-k8s-whisker--568467779b--jqkm2-eth0 whisker-568467779b- calico-system a62f9a6a-f8d3-4852-be55-6d4bed6c90c8 883 0 2025-12-16 12:15:47 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:568467779b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4547.0.0-a-623de6ebc0 whisker-568467779b-jqkm2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali93a33620861 [] [] }} ContainerID="e97f00701d3e977fc49eb0cb36ed8fbb4480b1d2ce7721ab8a8c84796c55b348" Namespace="calico-system" Pod="whisker-568467779b-jqkm2" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-whisker--568467779b--jqkm2-" Dec 16 12:15:48.216724 containerd[2077]: 2025-12-16 12:15:48.104 [INFO][4727] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="e97f00701d3e977fc49eb0cb36ed8fbb4480b1d2ce7721ab8a8c84796c55b348" Namespace="calico-system" Pod="whisker-568467779b-jqkm2" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-whisker--568467779b--jqkm2-eth0" Dec 16 12:15:48.216724 containerd[2077]: 2025-12-16 12:15:48.141 [INFO][4741] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e97f00701d3e977fc49eb0cb36ed8fbb4480b1d2ce7721ab8a8c84796c55b348" HandleID="k8s-pod-network.e97f00701d3e977fc49eb0cb36ed8fbb4480b1d2ce7721ab8a8c84796c55b348" Workload="ci--4547.0.0--a--623de6ebc0-k8s-whisker--568467779b--jqkm2-eth0" Dec 16 12:15:48.217091 containerd[2077]: 2025-12-16 12:15:48.141 [INFO][4741] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e97f00701d3e977fc49eb0cb36ed8fbb4480b1d2ce7721ab8a8c84796c55b348" HandleID="k8s-pod-network.e97f00701d3e977fc49eb0cb36ed8fbb4480b1d2ce7721ab8a8c84796c55b348" Workload="ci--4547.0.0--a--623de6ebc0-k8s-whisker--568467779b--jqkm2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002aae00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-a-623de6ebc0", "pod":"whisker-568467779b-jqkm2", "timestamp":"2025-12-16 12:15:48.141016309 +0000 UTC"}, Hostname:"ci-4547.0.0-a-623de6ebc0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:15:48.217091 containerd[2077]: 2025-12-16 12:15:48.141 [INFO][4741] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:15:48.217091 containerd[2077]: 2025-12-16 12:15:48.141 [INFO][4741] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:15:48.217091 containerd[2077]: 2025-12-16 12:15:48.141 [INFO][4741] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-623de6ebc0' Dec 16 12:15:48.217091 containerd[2077]: 2025-12-16 12:15:48.148 [INFO][4741] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e97f00701d3e977fc49eb0cb36ed8fbb4480b1d2ce7721ab8a8c84796c55b348" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:48.217091 containerd[2077]: 2025-12-16 12:15:48.152 [INFO][4741] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:48.217091 containerd[2077]: 2025-12-16 12:15:48.156 [INFO][4741] ipam/ipam.go 511: Trying affinity for 192.168.108.64/26 host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:48.217091 containerd[2077]: 2025-12-16 12:15:48.157 [INFO][4741] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.64/26 host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:48.217091 containerd[2077]: 2025-12-16 12:15:48.159 [INFO][4741] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.64/26 host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:48.217235 containerd[2077]: 2025-12-16 12:15:48.159 [INFO][4741] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.108.64/26 handle="k8s-pod-network.e97f00701d3e977fc49eb0cb36ed8fbb4480b1d2ce7721ab8a8c84796c55b348" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:48.217235 containerd[2077]: 2025-12-16 12:15:48.160 [INFO][4741] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e97f00701d3e977fc49eb0cb36ed8fbb4480b1d2ce7721ab8a8c84796c55b348 Dec 16 12:15:48.217235 containerd[2077]: 2025-12-16 12:15:48.170 [INFO][4741] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.108.64/26 
handle="k8s-pod-network.e97f00701d3e977fc49eb0cb36ed8fbb4480b1d2ce7721ab8a8c84796c55b348" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:48.217235 containerd[2077]: 2025-12-16 12:15:48.180 [INFO][4741] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.108.65/26] block=192.168.108.64/26 handle="k8s-pod-network.e97f00701d3e977fc49eb0cb36ed8fbb4480b1d2ce7721ab8a8c84796c55b348" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:48.217235 containerd[2077]: 2025-12-16 12:15:48.180 [INFO][4741] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.65/26] handle="k8s-pod-network.e97f00701d3e977fc49eb0cb36ed8fbb4480b1d2ce7721ab8a8c84796c55b348" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:48.217235 containerd[2077]: 2025-12-16 12:15:48.180 [INFO][4741] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:15:48.217235 containerd[2077]: 2025-12-16 12:15:48.180 [INFO][4741] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.108.65/26] IPv6=[] ContainerID="e97f00701d3e977fc49eb0cb36ed8fbb4480b1d2ce7721ab8a8c84796c55b348" HandleID="k8s-pod-network.e97f00701d3e977fc49eb0cb36ed8fbb4480b1d2ce7721ab8a8c84796c55b348" Workload="ci--4547.0.0--a--623de6ebc0-k8s-whisker--568467779b--jqkm2-eth0" Dec 16 12:15:48.217324 containerd[2077]: 2025-12-16 12:15:48.186 [INFO][4727] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e97f00701d3e977fc49eb0cb36ed8fbb4480b1d2ce7721ab8a8c84796c55b348" Namespace="calico-system" Pod="whisker-568467779b-jqkm2" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-whisker--568467779b--jqkm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--623de6ebc0-k8s-whisker--568467779b--jqkm2-eth0", GenerateName:"whisker-568467779b-", Namespace:"calico-system", SelfLink:"", UID:"a62f9a6a-f8d3-4852-be55-6d4bed6c90c8", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 15, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"568467779b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-623de6ebc0", ContainerID:"", Pod:"whisker-568467779b-jqkm2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.108.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali93a33620861", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:15:48.217324 containerd[2077]: 2025-12-16 12:15:48.186 [INFO][4727] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.65/32] ContainerID="e97f00701d3e977fc49eb0cb36ed8fbb4480b1d2ce7721ab8a8c84796c55b348" Namespace="calico-system" Pod="whisker-568467779b-jqkm2" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-whisker--568467779b--jqkm2-eth0" Dec 16 12:15:48.217373 containerd[2077]: 2025-12-16 12:15:48.186 [INFO][4727] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali93a33620861 
ContainerID="e97f00701d3e977fc49eb0cb36ed8fbb4480b1d2ce7721ab8a8c84796c55b348" Namespace="calico-system" Pod="whisker-568467779b-jqkm2" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-whisker--568467779b--jqkm2-eth0" Dec 16 12:15:48.217373 containerd[2077]: 2025-12-16 12:15:48.199 [INFO][4727] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e97f00701d3e977fc49eb0cb36ed8fbb4480b1d2ce7721ab8a8c84796c55b348" Namespace="calico-system" Pod="whisker-568467779b-jqkm2" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-whisker--568467779b--jqkm2-eth0" Dec 16 12:15:48.217401 containerd[2077]: 2025-12-16 12:15:48.200 [INFO][4727] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e97f00701d3e977fc49eb0cb36ed8fbb4480b1d2ce7721ab8a8c84796c55b348" Namespace="calico-system" Pod="whisker-568467779b-jqkm2" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-whisker--568467779b--jqkm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--623de6ebc0-k8s-whisker--568467779b--jqkm2-eth0", GenerateName:"whisker-568467779b-", Namespace:"calico-system", SelfLink:"", UID:"a62f9a6a-f8d3-4852-be55-6d4bed6c90c8", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 15, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"568467779b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-623de6ebc0", ContainerID:"e97f00701d3e977fc49eb0cb36ed8fbb4480b1d2ce7721ab8a8c84796c55b348", Pod:"whisker-568467779b-jqkm2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.108.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali93a33620861", MAC:"4e:1d:c0:4f:a2:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:15:48.217705 containerd[2077]: 2025-12-16 12:15:48.213 [INFO][4727] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e97f00701d3e977fc49eb0cb36ed8fbb4480b1d2ce7721ab8a8c84796c55b348" Namespace="calico-system" Pod="whisker-568467779b-jqkm2" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-whisker--568467779b--jqkm2-eth0" Dec 16 12:15:48.218000 audit: BPF prog-id=202 op=LOAD Dec 16 12:15:48.218000 audit[4784]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc0f4aff8 a2=98 a3=ffffc0f4afe8 items=0 ppid=4647 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.218000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:48.218000 audit: BPF prog-id=202 op=UNLOAD Dec 16 12:15:48.218000 audit[4784]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc0f4afc8 a3=0 items=0 ppid=4647 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.218000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:48.218000 audit: BPF prog-id=203 op=LOAD Dec 16 12:15:48.218000 audit[4784]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc0f4ac88 a2=74 a3=95 items=0 ppid=4647 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.218000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:48.221000 audit: BPF prog-id=203 op=UNLOAD Dec 16 12:15:48.221000 audit[4784]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4647 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.221000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:48.221000 audit: BPF prog-id=204 op=LOAD Dec 16 12:15:48.221000 audit[4784]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc0f4ace8 a2=94 a3=2 items=0 ppid=4647 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.221000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:48.221000 audit: BPF prog-id=204 op=UNLOAD Dec 16 12:15:48.221000 audit[4784]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4647 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.221000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:48.265806 containerd[2077]: time="2025-12-16T12:15:48.265679255Z" level=info msg="connecting to shim e97f00701d3e977fc49eb0cb36ed8fbb4480b1d2ce7721ab8a8c84796c55b348" address="unix:///run/containerd/s/a26fed2cf6f4cd1f85beed55309a1b5620d7bce9cc667b431e695a52af9f3ae5" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:15:48.290015 systemd[1]: Started cri-containerd-e97f00701d3e977fc49eb0cb36ed8fbb4480b1d2ce7721ab8a8c84796c55b348.scope - libcontainer container e97f00701d3e977fc49eb0cb36ed8fbb4480b1d2ce7721ab8a8c84796c55b348. 
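The audit PROCTITLE fields throughout this section store the audited process's command line as a hex-encoded, NUL-separated argv; the long hex blobs above and below are runc and bpftool invocations. A short Go sketch for decoding one follows, using a value copied from one of the bpftool records above; the helper name is mine.

    package main

    import (
        "encoding/hex"
        "fmt"
        "strings"
    )

    // decodeProctitle turns an audit PROCTITLE hex blob back into the argv it
    // encodes: the arguments are stored NUL-separated.
    func decodeProctitle(h string) ([]string, error) {
        raw, err := hex.DecodeString(h)
        if err != nil {
            return nil, err
        }
        return strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00"), nil
    }

    func main() {
        // proctitle value copied from one of the bpftool audit records above.
        argv, err := decodeProctitle("627066746F6F6C006D6170006C697374002D2D6A736F6E")
        if err != nil {
            panic(err)
        }
        fmt.Println(strings.Join(argv, " ")) // prints: bpftool map list --json
    }

Applied to the runc proctitle blobs earlier in this section, the same helper yields the runc --root /run/containerd/runc/k8s.io --log … invocations used to start the containers recorded in this log.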
Dec 16 12:15:48.303000 audit: BPF prog-id=205 op=LOAD Dec 16 12:15:48.304000 audit: BPF prog-id=206 op=LOAD Dec 16 12:15:48.304000 audit[4806]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=4794 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539376630303730316433653937376663343965623063623336656438 Dec 16 12:15:48.304000 audit: BPF prog-id=206 op=UNLOAD Dec 16 12:15:48.304000 audit[4806]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4794 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539376630303730316433653937376663343965623063623336656438 Dec 16 12:15:48.304000 audit: BPF prog-id=207 op=LOAD Dec 16 12:15:48.304000 audit[4806]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=4794 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539376630303730316433653937376663343965623063623336656438 Dec 16 12:15:48.304000 audit: BPF prog-id=208 op=LOAD Dec 16 12:15:48.304000 audit[4806]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=4794 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539376630303730316433653937376663343965623063623336656438 Dec 16 12:15:48.305000 audit: BPF prog-id=208 op=UNLOAD Dec 16 12:15:48.305000 audit[4806]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4794 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.305000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539376630303730316433653937376663343965623063623336656438 Dec 16 12:15:48.305000 audit: BPF prog-id=207 op=UNLOAD Dec 16 12:15:48.305000 audit[4806]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4794 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.305000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539376630303730316433653937376663343965623063623336656438 Dec 16 12:15:48.305000 audit: BPF prog-id=209 op=LOAD Dec 16 12:15:48.305000 audit[4806]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=4794 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.305000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539376630303730316433653937376663343965623063623336656438 Dec 16 12:15:48.334824 containerd[2077]: time="2025-12-16T12:15:48.334779111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-568467779b-jqkm2,Uid:a62f9a6a-f8d3-4852-be55-6d4bed6c90c8,Namespace:calico-system,Attempt:0,} returns sandbox id \"e97f00701d3e977fc49eb0cb36ed8fbb4480b1d2ce7721ab8a8c84796c55b348\"" Dec 16 12:15:48.336384 containerd[2077]: time="2025-12-16T12:15:48.336355463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:15:48.358000 audit: BPF prog-id=210 op=LOAD Dec 16 12:15:48.358000 audit[4784]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc0f4aca8 a2=40 a3=ffffc0f4acd8 items=0 ppid=4647 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.358000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:48.358000 audit: BPF prog-id=210 op=UNLOAD Dec 16 12:15:48.358000 audit[4784]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffc0f4acd8 items=0 ppid=4647 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.358000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:48.364000 audit: BPF prog-id=211 op=LOAD Dec 16 12:15:48.364000 audit[4784]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc0f4acb8 a2=94 a3=4 items=0 ppid=4647 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.364000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:48.365000 audit: BPF prog-id=211 op=UNLOAD Dec 16 12:15:48.365000 audit[4784]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4647 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.365000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:48.365000 audit: BPF prog-id=212 op=LOAD Dec 16 12:15:48.365000 audit[4784]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc0f4aaf8 a2=94 a3=5 items=0 ppid=4647 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.365000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:48.365000 audit: BPF prog-id=212 op=UNLOAD Dec 16 12:15:48.365000 audit[4784]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4647 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.365000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:48.365000 audit: BPF prog-id=213 op=LOAD Dec 16 12:15:48.365000 audit[4784]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc0f4ad28 a2=94 a3=6 items=0 ppid=4647 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.365000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:48.365000 audit: BPF prog-id=213 op=UNLOAD Dec 16 12:15:48.365000 audit[4784]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4647 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.365000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:48.365000 audit: BPF prog-id=214 op=LOAD Dec 16 12:15:48.365000 audit[4784]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc0f4a4f8 a2=94 a3=83 items=0 ppid=4647 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.365000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:48.365000 audit: BPF prog-id=215 op=LOAD Dec 16 12:15:48.365000 audit[4784]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffc0f4a2b8 a2=94 a3=2 items=0 ppid=4647 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.365000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:48.365000 audit: BPF prog-id=215 op=UNLOAD Dec 16 12:15:48.365000 audit[4784]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4647 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.365000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:48.366000 audit: BPF prog-id=214 op=UNLOAD Dec 
16 12:15:48.366000 audit[4784]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=23eb7620 a3=23eaab00 items=0 ppid=4647 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.366000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:48.371000 audit: BPF prog-id=216 op=LOAD Dec 16 12:15:48.371000 audit[4832]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcf23e4e8 a2=98 a3=ffffcf23e4d8 items=0 ppid=4647 pid=4832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.371000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:15:48.371000 audit: BPF prog-id=216 op=UNLOAD Dec 16 12:15:48.371000 audit[4832]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffcf23e4b8 a3=0 items=0 ppid=4647 pid=4832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.371000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:15:48.371000 audit: BPF prog-id=217 op=LOAD Dec 16 12:15:48.371000 audit[4832]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcf23e398 a2=74 a3=95 items=0 ppid=4647 pid=4832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.371000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:15:48.372000 audit: BPF prog-id=217 op=UNLOAD Dec 16 12:15:48.372000 audit[4832]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4647 pid=4832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.372000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:15:48.372000 audit: BPF prog-id=218 op=LOAD Dec 16 12:15:48.372000 audit[4832]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcf23e3c8 a2=40 a3=ffffcf23e3f8 items=0 ppid=4647 pid=4832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.372000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:15:48.372000 audit: BPF prog-id=218 op=UNLOAD Dec 16 12:15:48.372000 audit[4832]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffcf23e3f8 items=0 ppid=4647 pid=4832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.372000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:15:48.471345 systemd-networkd[1659]: vxlan.calico: Link UP Dec 16 12:15:48.471352 systemd-networkd[1659]: vxlan.calico: Gained carrier Dec 16 12:15:48.490000 audit: BPF prog-id=219 op=LOAD Dec 16 12:15:48.490000 audit[4856]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffff57ca38 a2=98 a3=ffffff57ca28 items=0 ppid=4647 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.490000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:15:48.490000 audit: BPF prog-id=219 op=UNLOAD Dec 16 12:15:48.490000 audit[4856]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffff57ca08 a3=0 items=0 ppid=4647 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.490000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:15:48.490000 audit: BPF prog-id=220 op=LOAD Dec 16 12:15:48.490000 audit[4856]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffff57c718 a2=74 a3=95 items=0 ppid=4647 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.490000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:15:48.490000 audit: BPF prog-id=220 op=UNLOAD Dec 16 12:15:48.490000 audit[4856]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4647 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.490000 audit: 
PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:15:48.490000 audit: BPF prog-id=221 op=LOAD Dec 16 12:15:48.490000 audit[4856]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffff57c778 a2=94 a3=2 items=0 ppid=4647 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.490000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:15:48.490000 audit: BPF prog-id=221 op=UNLOAD Dec 16 12:15:48.490000 audit[4856]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4647 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.490000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:15:48.490000 audit: BPF prog-id=222 op=LOAD Dec 16 12:15:48.490000 audit[4856]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffff57c5f8 a2=40 a3=ffffff57c628 items=0 ppid=4647 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.490000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:15:48.490000 audit: BPF prog-id=222 op=UNLOAD Dec 16 12:15:48.490000 audit[4856]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffff57c628 items=0 ppid=4647 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.490000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:15:48.490000 audit: BPF prog-id=223 op=LOAD Dec 16 12:15:48.490000 audit[4856]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffff57c748 a2=94 a3=b7 items=0 ppid=4647 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.490000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:15:48.490000 audit: BPF prog-id=223 op=UNLOAD Dec 16 12:15:48.490000 audit[4856]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4647 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.490000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:15:48.491000 audit: BPF prog-id=224 op=LOAD Dec 16 12:15:48.491000 audit[4856]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffff57bdf8 a2=94 a3=2 items=0 ppid=4647 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.491000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:15:48.491000 audit: BPF prog-id=224 op=UNLOAD Dec 16 12:15:48.491000 audit[4856]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4647 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.491000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:15:48.491000 audit: BPF prog-id=225 op=LOAD Dec 16 12:15:48.491000 audit[4856]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffff57bf88 a2=94 a3=30 items=0 ppid=4647 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.491000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:15:48.494000 audit: BPF prog-id=226 op=LOAD Dec 16 12:15:48.494000 audit[4860]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd6cdbed8 a2=98 a3=ffffd6cdbec8 items=0 ppid=4647 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.494000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:48.494000 audit: BPF prog-id=226 op=UNLOAD Dec 16 12:15:48.494000 audit[4860]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd6cdbea8 a3=0 items=0 ppid=4647 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.494000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:48.495000 audit: BPF prog-id=227 op=LOAD Dec 16 12:15:48.495000 audit[4860]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd6cdbb68 a2=74 a3=95 items=0 ppid=4647 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.495000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:48.495000 audit: BPF prog-id=227 op=UNLOAD Dec 16 12:15:48.495000 audit[4860]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4647 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.495000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:48.495000 audit: BPF prog-id=228 op=LOAD Dec 16 12:15:48.495000 audit[4860]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd6cdbbc8 a2=94 a3=2 items=0 ppid=4647 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.495000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:48.495000 audit: BPF prog-id=228 op=UNLOAD Dec 16 12:15:48.495000 audit[4860]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4647 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.495000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:48.580000 audit: BPF prog-id=229 op=LOAD Dec 16 12:15:48.580000 audit[4860]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd6cdbb88 a2=40 a3=ffffd6cdbbb8 items=0 ppid=4647 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.580000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:48.580000 audit: BPF prog-id=229 op=UNLOAD Dec 16 12:15:48.580000 audit[4860]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffd6cdbbb8 items=0 ppid=4647 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.580000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:48.586000 audit: BPF prog-id=230 op=LOAD Dec 16 12:15:48.586000 audit[4860]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd6cdbb98 a2=94 a3=4 items=0 ppid=4647 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.586000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:48.587000 audit: BPF prog-id=230 op=UNLOAD Dec 16 12:15:48.587000 audit[4860]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4647 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.587000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:48.587000 audit: BPF prog-id=231 op=LOAD Dec 16 12:15:48.587000 audit[4860]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd6cdb9d8 a2=94 a3=5 items=0 ppid=4647 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.587000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:48.587000 audit: BPF prog-id=231 op=UNLOAD Dec 16 12:15:48.587000 audit[4860]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4647 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.587000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:48.587000 audit: BPF prog-id=232 op=LOAD Dec 16 12:15:48.587000 audit[4860]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd6cdbc08 a2=94 a3=6 items=0 ppid=4647 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.587000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:48.587000 audit: BPF prog-id=232 op=UNLOAD Dec 16 12:15:48.587000 audit[4860]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 
items=0 ppid=4647 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.587000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:48.587000 audit: BPF prog-id=233 op=LOAD Dec 16 12:15:48.587000 audit[4860]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd6cdb3d8 a2=94 a3=83 items=0 ppid=4647 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.587000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:48.587000 audit: BPF prog-id=234 op=LOAD Dec 16 12:15:48.587000 audit[4860]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffd6cdb198 a2=94 a3=2 items=0 ppid=4647 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.587000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:48.587000 audit: BPF prog-id=234 op=UNLOAD Dec 16 12:15:48.587000 audit[4860]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4647 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.587000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:48.588000 audit: BPF prog-id=233 op=UNLOAD Dec 16 12:15:48.588000 audit[4860]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=2d103620 a3=2d0f6b00 items=0 ppid=4647 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.588000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:48.597000 audit: BPF prog-id=225 op=UNLOAD Dec 16 12:15:48.597000 audit[4647]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=40008e0800 a2=0 a3=0 items=0 ppid=4641 pid=4647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.597000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 12:15:48.604282 containerd[2077]: time="2025-12-16T12:15:48.604240047Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 
12:15:48.607683 containerd[2077]: time="2025-12-16T12:15:48.607617327Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:15:48.607776 containerd[2077]: time="2025-12-16T12:15:48.607702998Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:15:48.611030 kubelet[3629]: E1216 12:15:48.610001 3629 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:15:48.611030 kubelet[3629]: E1216 12:15:48.610064 3629 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:15:48.611030 kubelet[3629]: E1216 12:15:48.610171 3629 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-568467779b-jqkm2_calico-system(a62f9a6a-f8d3-4852-be55-6d4bed6c90c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:15:48.612921 containerd[2077]: time="2025-12-16T12:15:48.612863870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:15:48.678000 audit[4885]: NETFILTER_CFG table=nat:120 family=2 entries=15 op=nft_register_chain pid=4885 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:15:48.678000 audit[4885]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffecbf21b0 a2=0 a3=ffff93cf3fa8 items=0 ppid=4647 pid=4885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.678000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:15:48.681000 audit[4884]: NETFILTER_CFG table=raw:121 family=2 entries=21 op=nft_register_chain pid=4884 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:15:48.681000 audit[4884]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffcefbff60 a2=0 a3=ffff96e2ffa8 items=0 ppid=4647 pid=4884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.681000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:15:48.683000 audit[4892]: NETFILTER_CFG table=mangle:122 family=2 entries=16 op=nft_register_chain pid=4892 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:15:48.683000 audit[4892]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=6868 a0=3 a1=ffffc0e901e0 a2=0 a3=ffff8f9b7fa8 items=0 ppid=4647 pid=4892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.683000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:15:48.696000 audit[4886]: NETFILTER_CFG table=filter:123 family=2 entries=94 op=nft_register_chain pid=4886 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:15:48.696000 audit[4886]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffe9a20b90 a2=0 a3=ffffb126cfa8 items=0 ppid=4647 pid=4886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:48.696000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:15:48.874658 containerd[2077]: time="2025-12-16T12:15:48.874386544Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:15:48.877593 containerd[2077]: time="2025-12-16T12:15:48.877493549Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:15:48.877593 containerd[2077]: time="2025-12-16T12:15:48.877551369Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:15:48.877785 kubelet[3629]: E1216 12:15:48.877728 3629 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:15:48.878362 kubelet[3629]: E1216 12:15:48.877787 3629 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:15:48.878362 kubelet[3629]: E1216 12:15:48.877862 3629 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-568467779b-jqkm2_calico-system(a62f9a6a-f8d3-4852-be55-6d4bed6c90c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:15:48.878362 kubelet[3629]: E1216 12:15:48.877895 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", 
failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-568467779b-jqkm2" podUID="a62f9a6a-f8d3-4852-be55-6d4bed6c90c8" Dec 16 12:15:49.478846 kubelet[3629]: I1216 12:15:49.478807 3629 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c207d796-3dc3-47ed-bf51-f866d808dda0" path="/var/lib/kubelet/pods/c207d796-3dc3-47ed-bf51-f866d808dda0/volumes" Dec 16 12:15:49.620743 kubelet[3629]: E1216 12:15:49.620672 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-568467779b-jqkm2" podUID="a62f9a6a-f8d3-4852-be55-6d4bed6c90c8" Dec 16 12:15:49.690000 audit[4901]: NETFILTER_CFG table=filter:124 family=2 entries=20 op=nft_register_rule pid=4901 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:49.690000 audit[4901]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe039e320 a2=0 a3=1 items=0 ppid=3775 pid=4901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:49.690000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:49.696000 audit[4901]: NETFILTER_CFG table=nat:125 family=2 entries=14 op=nft_register_rule pid=4901 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:49.696000 audit[4901]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffe039e320 a2=0 a3=1 items=0 ppid=3775 pid=4901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:49.696000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:49.882912 systemd-networkd[1659]: cali93a33620861: Gained IPv6LL Dec 16 12:15:50.458931 systemd-networkd[1659]: vxlan.calico: Gained IPv6LL Dec 16 12:15:52.483475 containerd[2077]: time="2025-12-16T12:15:52.483290415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8x7w7,Uid:306c4b20-3806-46c9-b86b-305f578267e9,Namespace:kube-system,Attempt:0,}" Dec 16 12:15:52.487377 containerd[2077]: time="2025-12-16T12:15:52.487342472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lft87,Uid:8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85,Namespace:calico-system,Attempt:0,}" Dec 16 12:15:52.606370 
systemd-networkd[1659]: calif2351847ea2: Link UP Dec 16 12:15:52.606516 systemd-networkd[1659]: calif2351847ea2: Gained carrier Dec 16 12:15:52.624910 containerd[2077]: 2025-12-16 12:15:52.532 [INFO][4904] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--623de6ebc0-k8s-coredns--66bc5c9577--8x7w7-eth0 coredns-66bc5c9577- kube-system 306c4b20-3806-46c9-b86b-305f578267e9 810 0 2025-12-16 12:15:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547.0.0-a-623de6ebc0 coredns-66bc5c9577-8x7w7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif2351847ea2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="7c7a06ad49ac89988c95a02db95ab2bdca374ac2b710ebb93e27c0e6e7eae02b" Namespace="kube-system" Pod="coredns-66bc5c9577-8x7w7" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-coredns--66bc5c9577--8x7w7-" Dec 16 12:15:52.624910 containerd[2077]: 2025-12-16 12:15:52.532 [INFO][4904] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7c7a06ad49ac89988c95a02db95ab2bdca374ac2b710ebb93e27c0e6e7eae02b" Namespace="kube-system" Pod="coredns-66bc5c9577-8x7w7" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-coredns--66bc5c9577--8x7w7-eth0" Dec 16 12:15:52.624910 containerd[2077]: 2025-12-16 12:15:52.561 [INFO][4927] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c7a06ad49ac89988c95a02db95ab2bdca374ac2b710ebb93e27c0e6e7eae02b" HandleID="k8s-pod-network.7c7a06ad49ac89988c95a02db95ab2bdca374ac2b710ebb93e27c0e6e7eae02b" Workload="ci--4547.0.0--a--623de6ebc0-k8s-coredns--66bc5c9577--8x7w7-eth0" Dec 16 12:15:52.625178 containerd[2077]: 2025-12-16 12:15:52.561 [INFO][4927] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7c7a06ad49ac89988c95a02db95ab2bdca374ac2b710ebb93e27c0e6e7eae02b" HandleID="k8s-pod-network.7c7a06ad49ac89988c95a02db95ab2bdca374ac2b710ebb93e27c0e6e7eae02b" Workload="ci--4547.0.0--a--623de6ebc0-k8s-coredns--66bc5c9577--8x7w7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b200), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547.0.0-a-623de6ebc0", "pod":"coredns-66bc5c9577-8x7w7", "timestamp":"2025-12-16 12:15:52.561405097 +0000 UTC"}, Hostname:"ci-4547.0.0-a-623de6ebc0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:15:52.625178 containerd[2077]: 2025-12-16 12:15:52.561 [INFO][4927] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:15:52.625178 containerd[2077]: 2025-12-16 12:15:52.561 [INFO][4927] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
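The long hex strings in the audit PROCTITLE records earlier in this stretch of the log are the audited command lines, hex-encoded with NUL bytes separating the argv elements; decoding them shows which bpftool and iptables-nft-restore invocations were run under calico-node (ppid 4647) while the BPF maps and nftables chains were being set up. A minimal decoding sketch — the hex literal below is copied from one of the bpftool audit records above, everything else is standard library:

    # Decode an audit PROCTITLE value: the command line, hex-encoded,
    # with NUL bytes between argv elements.
    PROCTITLE = "627066746F6F6C006D6170006C697374002D2D6A736F6E"
    argv = bytes.fromhex(PROCTITLE).split(b"\x00")
    print(" ".join(a.decode() for a in argv))   # bpftool map list --json

The longer records decode the same way, for example to "iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000" and to the bpftool map create / prog load calls against /sys/fs/bpf/calico/.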
Dec 16 12:15:52.625178 containerd[2077]: 2025-12-16 12:15:52.561 [INFO][4927] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-623de6ebc0' Dec 16 12:15:52.625178 containerd[2077]: 2025-12-16 12:15:52.569 [INFO][4927] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7c7a06ad49ac89988c95a02db95ab2bdca374ac2b710ebb93e27c0e6e7eae02b" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:52.625178 containerd[2077]: 2025-12-16 12:15:52.573 [INFO][4927] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:52.625178 containerd[2077]: 2025-12-16 12:15:52.578 [INFO][4927] ipam/ipam.go 511: Trying affinity for 192.168.108.64/26 host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:52.625178 containerd[2077]: 2025-12-16 12:15:52.581 [INFO][4927] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.64/26 host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:52.625178 containerd[2077]: 2025-12-16 12:15:52.582 [INFO][4927] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.64/26 host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:52.625317 containerd[2077]: 2025-12-16 12:15:52.582 [INFO][4927] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.108.64/26 handle="k8s-pod-network.7c7a06ad49ac89988c95a02db95ab2bdca374ac2b710ebb93e27c0e6e7eae02b" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:52.625317 containerd[2077]: 2025-12-16 12:15:52.584 [INFO][4927] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7c7a06ad49ac89988c95a02db95ab2bdca374ac2b710ebb93e27c0e6e7eae02b Dec 16 12:15:52.625317 containerd[2077]: 2025-12-16 12:15:52.592 [INFO][4927] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.108.64/26 handle="k8s-pod-network.7c7a06ad49ac89988c95a02db95ab2bdca374ac2b710ebb93e27c0e6e7eae02b" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:52.625317 containerd[2077]: 2025-12-16 12:15:52.597 [INFO][4927] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.108.66/26] block=192.168.108.64/26 handle="k8s-pod-network.7c7a06ad49ac89988c95a02db95ab2bdca374ac2b710ebb93e27c0e6e7eae02b" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:52.625317 containerd[2077]: 2025-12-16 12:15:52.598 [INFO][4927] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.66/26] handle="k8s-pod-network.7c7a06ad49ac89988c95a02db95ab2bdca374ac2b710ebb93e27c0e6e7eae02b" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:52.625317 containerd[2077]: 2025-12-16 12:15:52.598 [INFO][4927] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
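The IPAM entries above show this host holding an affinity for the 192.168.108.64/26 block and handing addresses out of it: 192.168.108.66 is claimed here for coredns-66bc5c9577-8x7w7, and 192.168.108.67 is claimed for csi-node-driver-lft87 further down. A quick, purely illustrative check that the claimed address really sits inside the affine block:

    import ipaddress

    block = ipaddress.ip_network("192.168.108.64/26")   # block from the affinity entries above
    claimed = ipaddress.ip_address("192.168.108.66")    # address claimed for the coredns pod
    print(claimed in block)        # True
    print(block.num_addresses)     # 64, consistent with Calico's default /26 block size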
Dec 16 12:15:52.625317 containerd[2077]: 2025-12-16 12:15:52.598 [INFO][4927] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.108.66/26] IPv6=[] ContainerID="7c7a06ad49ac89988c95a02db95ab2bdca374ac2b710ebb93e27c0e6e7eae02b" HandleID="k8s-pod-network.7c7a06ad49ac89988c95a02db95ab2bdca374ac2b710ebb93e27c0e6e7eae02b" Workload="ci--4547.0.0--a--623de6ebc0-k8s-coredns--66bc5c9577--8x7w7-eth0" Dec 16 12:15:52.625486 containerd[2077]: 2025-12-16 12:15:52.601 [INFO][4904] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7c7a06ad49ac89988c95a02db95ab2bdca374ac2b710ebb93e27c0e6e7eae02b" Namespace="kube-system" Pod="coredns-66bc5c9577-8x7w7" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-coredns--66bc5c9577--8x7w7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--623de6ebc0-k8s-coredns--66bc5c9577--8x7w7-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"306c4b20-3806-46c9-b86b-305f578267e9", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 15, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-623de6ebc0", ContainerID:"", Pod:"coredns-66bc5c9577-8x7w7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.108.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2351847ea2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:15:52.625486 containerd[2077]: 2025-12-16 12:15:52.601 [INFO][4904] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.66/32] ContainerID="7c7a06ad49ac89988c95a02db95ab2bdca374ac2b710ebb93e27c0e6e7eae02b" Namespace="kube-system" Pod="coredns-66bc5c9577-8x7w7" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-coredns--66bc5c9577--8x7w7-eth0" Dec 16 12:15:52.625486 containerd[2077]: 2025-12-16 12:15:52.601 [INFO][4904] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2351847ea2 ContainerID="7c7a06ad49ac89988c95a02db95ab2bdca374ac2b710ebb93e27c0e6e7eae02b" Namespace="kube-system" Pod="coredns-66bc5c9577-8x7w7" 
WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-coredns--66bc5c9577--8x7w7-eth0" Dec 16 12:15:52.625486 containerd[2077]: 2025-12-16 12:15:52.606 [INFO][4904] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7c7a06ad49ac89988c95a02db95ab2bdca374ac2b710ebb93e27c0e6e7eae02b" Namespace="kube-system" Pod="coredns-66bc5c9577-8x7w7" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-coredns--66bc5c9577--8x7w7-eth0" Dec 16 12:15:52.625486 containerd[2077]: 2025-12-16 12:15:52.607 [INFO][4904] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7c7a06ad49ac89988c95a02db95ab2bdca374ac2b710ebb93e27c0e6e7eae02b" Namespace="kube-system" Pod="coredns-66bc5c9577-8x7w7" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-coredns--66bc5c9577--8x7w7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--623de6ebc0-k8s-coredns--66bc5c9577--8x7w7-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"306c4b20-3806-46c9-b86b-305f578267e9", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 15, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-623de6ebc0", ContainerID:"7c7a06ad49ac89988c95a02db95ab2bdca374ac2b710ebb93e27c0e6e7eae02b", Pod:"coredns-66bc5c9577-8x7w7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.108.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2351847ea2", MAC:"c6:05:0b:9a:a5:62", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:15:52.626078 containerd[2077]: 2025-12-16 12:15:52.621 [INFO][4904] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7c7a06ad49ac89988c95a02db95ab2bdca374ac2b710ebb93e27c0e6e7eae02b" Namespace="kube-system" Pod="coredns-66bc5c9577-8x7w7" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-coredns--66bc5c9577--8x7w7-eth0" Dec 16 12:15:52.636000 audit[4951]: NETFILTER_CFG table=filter:126 family=2 entries=42 op=nft_register_chain pid=4951 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 
12:15:52.641711 kernel: kauditd_printk_skb: 237 callbacks suppressed Dec 16 12:15:52.641800 kernel: audit: type=1325 audit(1765887352.636:681): table=filter:126 family=2 entries=42 op=nft_register_chain pid=4951 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:15:52.636000 audit[4951]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22552 a0=3 a1=ffffc69113f0 a2=0 a3=ffff8f655fa8 items=0 ppid=4647 pid=4951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:52.671476 kernel: audit: type=1300 audit(1765887352.636:681): arch=c00000b7 syscall=211 success=yes exit=22552 a0=3 a1=ffffc69113f0 a2=0 a3=ffff8f655fa8 items=0 ppid=4647 pid=4951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:52.636000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:15:52.683099 kernel: audit: type=1327 audit(1765887352.636:681): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:15:52.703328 containerd[2077]: time="2025-12-16T12:15:52.703280415Z" level=info msg="connecting to shim 7c7a06ad49ac89988c95a02db95ab2bdca374ac2b710ebb93e27c0e6e7eae02b" address="unix:///run/containerd/s/9c342105800c57cd920e7ab17c4e39ece51c051fb82a871d18a2dc59789f920b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:15:52.730950 systemd[1]: Started cri-containerd-7c7a06ad49ac89988c95a02db95ab2bdca374ac2b710ebb93e27c0e6e7eae02b.scope - libcontainer container 7c7a06ad49ac89988c95a02db95ab2bdca374ac2b710ebb93e27c0e6e7eae02b. 
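The whisker and whisker-backend pulls above both fail with 404 Not Found from ghcr.io, so the kubelet moves the pod into ErrImagePull and then ImagePullBackOff while the sandbox itself stays up. A hedged way to confirm the tag is simply absent from the registry — this sketch assumes ghcr.io's anonymous token endpoint and the standard OCI distribution API behave as documented, and takes the repository and tag names from the kubelet errors above:

    import json, urllib.error, urllib.parse, urllib.request

    repo, tag = "flatcar/calico/whisker", "v3.30.4"   # from the ErrImagePull messages

    # Anonymous pull token (assumed ghcr.io token endpoint).
    tok_url = ("https://ghcr.io/token?service=ghcr.io&scope=repository:"
               + urllib.parse.quote(repo) + ":pull")
    token = json.load(urllib.request.urlopen(tok_url))["token"]

    # Ask for the manifest; a 404 here mirrors the containerd "not found" error.
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/vnd.oci.image.index.v1+json"})
    try:
        urllib.request.urlopen(req)
        print("tag exists")
    except urllib.error.HTTPError as e:
        print("registry returned", e.code)

If the tag really is missing upstream, the problem is on the image/tag (or mirror) side rather than in the CNI and sandbox setup that the rest of this log records.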
Dec 16 12:15:52.736460 systemd-networkd[1659]: calicb88a408230: Link UP Dec 16 12:15:52.738948 systemd-networkd[1659]: calicb88a408230: Gained carrier Dec 16 12:15:52.750000 audit: BPF prog-id=235 op=LOAD Dec 16 12:15:52.755000 audit: BPF prog-id=236 op=LOAD Dec 16 12:15:52.762273 kernel: audit: type=1334 audit(1765887352.750:682): prog-id=235 op=LOAD Dec 16 12:15:52.762365 kernel: audit: type=1334 audit(1765887352.755:683): prog-id=236 op=LOAD Dec 16 12:15:52.762382 kernel: audit: type=1300 audit(1765887352.755:683): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186180 a2=98 a3=0 items=0 ppid=4960 pid=4973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:52.755000 audit[4973]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186180 a2=98 a3=0 items=0 ppid=4960 pid=4973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:52.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763376130366164343961633839393838633935613032646239356162 Dec 16 12:15:52.796269 kernel: audit: type=1327 audit(1765887352.755:683): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763376130366164343961633839393838633935613032646239356162 Dec 16 12:15:52.797444 kernel: audit: type=1334 audit(1765887352.755:684): prog-id=236 op=UNLOAD Dec 16 12:15:52.755000 audit: BPF prog-id=236 op=UNLOAD Dec 16 12:15:52.802033 containerd[2077]: 2025-12-16 12:15:52.548 [INFO][4913] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--623de6ebc0-k8s-csi--node--driver--lft87-eth0 csi-node-driver- calico-system 8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85 700 0 2025-12-16 12:15:31 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4547.0.0-a-623de6ebc0 csi-node-driver-lft87 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicb88a408230 [] [] }} ContainerID="438d1befa1596c007275fe82002c9effccc10adc7a659a3a3537bf432a7c2873" Namespace="calico-system" Pod="csi-node-driver-lft87" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-csi--node--driver--lft87-" Dec 16 12:15:52.802033 containerd[2077]: 2025-12-16 12:15:52.548 [INFO][4913] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="438d1befa1596c007275fe82002c9effccc10adc7a659a3a3537bf432a7c2873" Namespace="calico-system" Pod="csi-node-driver-lft87" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-csi--node--driver--lft87-eth0" Dec 16 12:15:52.802033 containerd[2077]: 2025-12-16 12:15:52.580 [INFO][4932] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="438d1befa1596c007275fe82002c9effccc10adc7a659a3a3537bf432a7c2873" 
HandleID="k8s-pod-network.438d1befa1596c007275fe82002c9effccc10adc7a659a3a3537bf432a7c2873" Workload="ci--4547.0.0--a--623de6ebc0-k8s-csi--node--driver--lft87-eth0" Dec 16 12:15:52.802033 containerd[2077]: 2025-12-16 12:15:52.580 [INFO][4932] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="438d1befa1596c007275fe82002c9effccc10adc7a659a3a3537bf432a7c2873" HandleID="k8s-pod-network.438d1befa1596c007275fe82002c9effccc10adc7a659a3a3537bf432a7c2873" Workload="ci--4547.0.0--a--623de6ebc0-k8s-csi--node--driver--lft87-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3820), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-a-623de6ebc0", "pod":"csi-node-driver-lft87", "timestamp":"2025-12-16 12:15:52.58051654 +0000 UTC"}, Hostname:"ci-4547.0.0-a-623de6ebc0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:15:52.802033 containerd[2077]: 2025-12-16 12:15:52.580 [INFO][4932] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:15:52.802033 containerd[2077]: 2025-12-16 12:15:52.598 [INFO][4932] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:15:52.802033 containerd[2077]: 2025-12-16 12:15:52.598 [INFO][4932] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-623de6ebc0' Dec 16 12:15:52.802033 containerd[2077]: 2025-12-16 12:15:52.684 [INFO][4932] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.438d1befa1596c007275fe82002c9effccc10adc7a659a3a3537bf432a7c2873" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:52.802033 containerd[2077]: 2025-12-16 12:15:52.693 [INFO][4932] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:52.802033 containerd[2077]: 2025-12-16 12:15:52.697 [INFO][4932] ipam/ipam.go 511: Trying affinity for 192.168.108.64/26 host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:52.802033 containerd[2077]: 2025-12-16 12:15:52.699 [INFO][4932] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.64/26 host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:52.802033 containerd[2077]: 2025-12-16 12:15:52.703 [INFO][4932] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.64/26 host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:52.802033 containerd[2077]: 2025-12-16 12:15:52.703 [INFO][4932] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.108.64/26 handle="k8s-pod-network.438d1befa1596c007275fe82002c9effccc10adc7a659a3a3537bf432a7c2873" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:52.802033 containerd[2077]: 2025-12-16 12:15:52.706 [INFO][4932] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.438d1befa1596c007275fe82002c9effccc10adc7a659a3a3537bf432a7c2873 Dec 16 12:15:52.802033 containerd[2077]: 2025-12-16 12:15:52.713 [INFO][4932] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.108.64/26 handle="k8s-pod-network.438d1befa1596c007275fe82002c9effccc10adc7a659a3a3537bf432a7c2873" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:52.802033 containerd[2077]: 2025-12-16 12:15:52.729 [INFO][4932] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.108.67/26] block=192.168.108.64/26 handle="k8s-pod-network.438d1befa1596c007275fe82002c9effccc10adc7a659a3a3537bf432a7c2873" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:52.802033 containerd[2077]: 2025-12-16 12:15:52.729 
[INFO][4932] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.67/26] handle="k8s-pod-network.438d1befa1596c007275fe82002c9effccc10adc7a659a3a3537bf432a7c2873" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:52.802033 containerd[2077]: 2025-12-16 12:15:52.729 [INFO][4932] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:15:52.802033 containerd[2077]: 2025-12-16 12:15:52.729 [INFO][4932] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.108.67/26] IPv6=[] ContainerID="438d1befa1596c007275fe82002c9effccc10adc7a659a3a3537bf432a7c2873" HandleID="k8s-pod-network.438d1befa1596c007275fe82002c9effccc10adc7a659a3a3537bf432a7c2873" Workload="ci--4547.0.0--a--623de6ebc0-k8s-csi--node--driver--lft87-eth0" Dec 16 12:15:52.802456 containerd[2077]: 2025-12-16 12:15:52.732 [INFO][4913] cni-plugin/k8s.go 418: Populated endpoint ContainerID="438d1befa1596c007275fe82002c9effccc10adc7a659a3a3537bf432a7c2873" Namespace="calico-system" Pod="csi-node-driver-lft87" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-csi--node--driver--lft87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--623de6ebc0-k8s-csi--node--driver--lft87-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85", ResourceVersion:"700", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 15, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-623de6ebc0", ContainerID:"", Pod:"csi-node-driver-lft87", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.108.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicb88a408230", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:15:52.802456 containerd[2077]: 2025-12-16 12:15:52.732 [INFO][4913] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.67/32] ContainerID="438d1befa1596c007275fe82002c9effccc10adc7a659a3a3537bf432a7c2873" Namespace="calico-system" Pod="csi-node-driver-lft87" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-csi--node--driver--lft87-eth0" Dec 16 12:15:52.802456 containerd[2077]: 2025-12-16 12:15:52.732 [INFO][4913] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb88a408230 ContainerID="438d1befa1596c007275fe82002c9effccc10adc7a659a3a3537bf432a7c2873" Namespace="calico-system" Pod="csi-node-driver-lft87" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-csi--node--driver--lft87-eth0" Dec 16 12:15:52.802456 containerd[2077]: 2025-12-16 12:15:52.741 [INFO][4913] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="438d1befa1596c007275fe82002c9effccc10adc7a659a3a3537bf432a7c2873" Namespace="calico-system" Pod="csi-node-driver-lft87" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-csi--node--driver--lft87-eth0" Dec 16 12:15:52.802456 containerd[2077]: 2025-12-16 12:15:52.742 [INFO][4913] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="438d1befa1596c007275fe82002c9effccc10adc7a659a3a3537bf432a7c2873" Namespace="calico-system" Pod="csi-node-driver-lft87" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-csi--node--driver--lft87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--623de6ebc0-k8s-csi--node--driver--lft87-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85", ResourceVersion:"700", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 15, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-623de6ebc0", ContainerID:"438d1befa1596c007275fe82002c9effccc10adc7a659a3a3537bf432a7c2873", Pod:"csi-node-driver-lft87", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.108.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicb88a408230", MAC:"f2:45:88:91:82:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:15:52.802456 containerd[2077]: 2025-12-16 12:15:52.780 [INFO][4913] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="438d1befa1596c007275fe82002c9effccc10adc7a659a3a3537bf432a7c2873" Namespace="calico-system" Pod="csi-node-driver-lft87" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-csi--node--driver--lft87-eth0" Dec 16 12:15:52.755000 audit[4973]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4960 pid=4973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:52.821038 kernel: audit: type=1300 audit(1765887352.755:684): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4960 pid=4973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:52.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763376130366164343961633839393838633935613032646239356162 Dec 16 12:15:52.848771 kernel: audit: type=1327 
audit(1765887352.755:684): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763376130366164343961633839393838633935613032646239356162 Dec 16 12:15:52.755000 audit: BPF prog-id=237 op=LOAD Dec 16 12:15:52.755000 audit[4973]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=4960 pid=4973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:52.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763376130366164343961633839393838633935613032646239356162 Dec 16 12:15:52.777000 audit: BPF prog-id=238 op=LOAD Dec 16 12:15:52.777000 audit[4973]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000186168 a2=98 a3=0 items=0 ppid=4960 pid=4973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:52.777000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763376130366164343961633839393838633935613032646239356162 Dec 16 12:15:52.777000 audit: BPF prog-id=238 op=UNLOAD Dec 16 12:15:52.777000 audit[4973]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4960 pid=4973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:52.777000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763376130366164343961633839393838633935613032646239356162 Dec 16 12:15:52.777000 audit: BPF prog-id=237 op=UNLOAD Dec 16 12:15:52.777000 audit[4973]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4960 pid=4973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:52.777000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763376130366164343961633839393838633935613032646239356162 Dec 16 12:15:52.778000 audit: BPF prog-id=239 op=LOAD Dec 16 12:15:52.778000 audit[4973]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186648 a2=98 a3=0 items=0 ppid=4960 pid=4973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:52.778000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763376130366164343961633839393838633935613032646239356162 Dec 16 12:15:52.857000 audit[5007]: NETFILTER_CFG table=filter:127 family=2 entries=40 op=nft_register_chain pid=5007 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:15:52.857000 audit[5007]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20764 a0=3 a1=ffffe0327ac0 a2=0 a3=ffff9005dfa8 items=0 ppid=4647 pid=5007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:52.857000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:15:52.864683 containerd[2077]: time="2025-12-16T12:15:52.864641058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8x7w7,Uid:306c4b20-3806-46c9-b86b-305f578267e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c7a06ad49ac89988c95a02db95ab2bdca374ac2b710ebb93e27c0e6e7eae02b\"" Dec 16 12:15:52.873530 containerd[2077]: time="2025-12-16T12:15:52.873491905Z" level=info msg="CreateContainer within sandbox \"7c7a06ad49ac89988c95a02db95ab2bdca374ac2b710ebb93e27c0e6e7eae02b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:15:52.891712 containerd[2077]: time="2025-12-16T12:15:52.891161101Z" level=info msg="connecting to shim 438d1befa1596c007275fe82002c9effccc10adc7a659a3a3537bf432a7c2873" address="unix:///run/containerd/s/e336021e4c84f57aa10ab98cf4019792cfb69938d37cc3550afe0506346199ac" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:15:52.911008 systemd[1]: Started cri-containerd-438d1befa1596c007275fe82002c9effccc10adc7a659a3a3537bf432a7c2873.scope - libcontainer container 438d1befa1596c007275fe82002c9effccc10adc7a659a3a3537bf432a7c2873. 
Dec 16 12:15:52.913027 containerd[2077]: time="2025-12-16T12:15:52.912997306Z" level=info msg="Container 5b72ee13e2297910a6d9657dba8d6a8d5bd31bec11883eb1e0d4c7b1529248b1: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:15:52.921000 audit: BPF prog-id=240 op=LOAD Dec 16 12:15:52.921000 audit: BPF prog-id=241 op=LOAD Dec 16 12:15:52.921000 audit[5027]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=5016 pid=5027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:52.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433386431626566613135393663303037323735666538323030326339 Dec 16 12:15:52.921000 audit: BPF prog-id=241 op=UNLOAD Dec 16 12:15:52.921000 audit[5027]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5016 pid=5027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:52.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433386431626566613135393663303037323735666538323030326339 Dec 16 12:15:52.921000 audit: BPF prog-id=242 op=LOAD Dec 16 12:15:52.921000 audit[5027]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=5016 pid=5027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:52.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433386431626566613135393663303037323735666538323030326339 Dec 16 12:15:52.922000 audit: BPF prog-id=243 op=LOAD Dec 16 12:15:52.922000 audit[5027]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=5016 pid=5027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:52.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433386431626566613135393663303037323735666538323030326339 Dec 16 12:15:52.922000 audit: BPF prog-id=243 op=UNLOAD Dec 16 12:15:52.922000 audit[5027]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5016 pid=5027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:52.922000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433386431626566613135393663303037323735666538323030326339 Dec 16 12:15:52.922000 audit: BPF prog-id=242 op=UNLOAD Dec 16 12:15:52.922000 audit[5027]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5016 pid=5027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:52.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433386431626566613135393663303037323735666538323030326339 Dec 16 12:15:52.922000 audit: BPF prog-id=244 op=LOAD Dec 16 12:15:52.922000 audit[5027]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=5016 pid=5027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:52.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433386431626566613135393663303037323735666538323030326339 Dec 16 12:15:52.931675 containerd[2077]: time="2025-12-16T12:15:52.931643468Z" level=info msg="CreateContainer within sandbox \"7c7a06ad49ac89988c95a02db95ab2bdca374ac2b710ebb93e27c0e6e7eae02b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5b72ee13e2297910a6d9657dba8d6a8d5bd31bec11883eb1e0d4c7b1529248b1\"" Dec 16 12:15:52.932914 containerd[2077]: time="2025-12-16T12:15:52.932888573Z" level=info msg="StartContainer for \"5b72ee13e2297910a6d9657dba8d6a8d5bd31bec11883eb1e0d4c7b1529248b1\"" Dec 16 12:15:52.934508 containerd[2077]: time="2025-12-16T12:15:52.934425314Z" level=info msg="connecting to shim 5b72ee13e2297910a6d9657dba8d6a8d5bd31bec11883eb1e0d4c7b1529248b1" address="unix:///run/containerd/s/9c342105800c57cd920e7ab17c4e39ece51c051fb82a871d18a2dc59789f920b" protocol=ttrpc version=3 Dec 16 12:15:52.941314 containerd[2077]: time="2025-12-16T12:15:52.941275131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lft87,Uid:8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85,Namespace:calico-system,Attempt:0,} returns sandbox id \"438d1befa1596c007275fe82002c9effccc10adc7a659a3a3537bf432a7c2873\"" Dec 16 12:15:52.943237 containerd[2077]: time="2025-12-16T12:15:52.943204340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:15:52.954952 systemd[1]: Started cri-containerd-5b72ee13e2297910a6d9657dba8d6a8d5bd31bec11883eb1e0d4c7b1529248b1.scope - libcontainer container 5b72ee13e2297910a6d9657dba8d6a8d5bd31bec11883eb1e0d4c7b1529248b1. 
Dec 16 12:15:52.965000 audit: BPF prog-id=245 op=LOAD Dec 16 12:15:52.965000 audit: BPF prog-id=246 op=LOAD Dec 16 12:15:52.965000 audit[5053]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4960 pid=5053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:52.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562373265653133653232393739313061366439363537646261386436 Dec 16 12:15:52.966000 audit: BPF prog-id=246 op=UNLOAD Dec 16 12:15:52.966000 audit[5053]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4960 pid=5053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:52.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562373265653133653232393739313061366439363537646261386436 Dec 16 12:15:52.966000 audit: BPF prog-id=247 op=LOAD Dec 16 12:15:52.966000 audit[5053]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4960 pid=5053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:52.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562373265653133653232393739313061366439363537646261386436 Dec 16 12:15:52.966000 audit: BPF prog-id=248 op=LOAD Dec 16 12:15:52.966000 audit[5053]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4960 pid=5053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:52.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562373265653133653232393739313061366439363537646261386436 Dec 16 12:15:52.966000 audit: BPF prog-id=248 op=UNLOAD Dec 16 12:15:52.966000 audit[5053]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4960 pid=5053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:52.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562373265653133653232393739313061366439363537646261386436 Dec 16 12:15:52.966000 audit: BPF prog-id=247 op=UNLOAD Dec 16 12:15:52.966000 audit[5053]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4960 pid=5053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:52.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562373265653133653232393739313061366439363537646261386436 Dec 16 12:15:52.967000 audit: BPF prog-id=249 op=LOAD Dec 16 12:15:52.967000 audit[5053]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4960 pid=5053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:52.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562373265653133653232393739313061366439363537646261386436 Dec 16 12:15:52.989534 containerd[2077]: time="2025-12-16T12:15:52.989426356Z" level=info msg="StartContainer for \"5b72ee13e2297910a6d9657dba8d6a8d5bd31bec11883eb1e0d4c7b1529248b1\" returns successfully" Dec 16 12:15:53.201126 containerd[2077]: time="2025-12-16T12:15:53.201078010Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:15:53.204156 containerd[2077]: time="2025-12-16T12:15:53.204051062Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:15:53.204156 containerd[2077]: time="2025-12-16T12:15:53.204104250Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:15:53.204332 kubelet[3629]: E1216 12:15:53.204288 3629 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:15:53.204643 kubelet[3629]: E1216 12:15:53.204335 3629 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:15:53.204643 kubelet[3629]: E1216 12:15:53.204401 3629 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-lft87_calico-system(8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:15:53.206053 containerd[2077]: time="2025-12-16T12:15:53.206026939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:15:53.471502 containerd[2077]: time="2025-12-16T12:15:53.471313239Z" level=info msg="fetch failed after 
status: 404 Not Found" host=ghcr.io Dec 16 12:15:53.474477 containerd[2077]: time="2025-12-16T12:15:53.474374351Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:15:53.474477 containerd[2077]: time="2025-12-16T12:15:53.474431915Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:15:53.474952 kubelet[3629]: E1216 12:15:53.474742 3629 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:15:53.474952 kubelet[3629]: E1216 12:15:53.474800 3629 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:15:53.474952 kubelet[3629]: E1216 12:15:53.474873 3629 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-lft87_calico-system(8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:15:53.474952 kubelet[3629]: E1216 12:15:53.474905 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lft87" podUID="8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85" Dec 16 12:15:53.486945 containerd[2077]: time="2025-12-16T12:15:53.486884619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65f8f4c6db-glr2q,Uid:a081eac5-c790-4263-a08c-1af1e10fce20,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:15:53.592258 systemd-networkd[1659]: cali3bc8f32cf78: Link UP Dec 16 12:15:53.593596 systemd-networkd[1659]: cali3bc8f32cf78: Gained carrier Dec 16 12:15:53.614073 containerd[2077]: 2025-12-16 12:15:53.523 [INFO][5084] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--623de6ebc0-k8s-calico--apiserver--65f8f4c6db--glr2q-eth0 calico-apiserver-65f8f4c6db- calico-apiserver a081eac5-c790-4263-a08c-1af1e10fce20 811 0 2025-12-16 12:15:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver 
pod-template-hash:65f8f4c6db projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.0.0-a-623de6ebc0 calico-apiserver-65f8f4c6db-glr2q eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3bc8f32cf78 [] [] }} ContainerID="8aa16eaccfca47d70327f9769d338769b72af6f508e23dd3ea836627acb54a8f" Namespace="calico-apiserver" Pod="calico-apiserver-65f8f4c6db-glr2q" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-calico--apiserver--65f8f4c6db--glr2q-" Dec 16 12:15:53.614073 containerd[2077]: 2025-12-16 12:15:53.523 [INFO][5084] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8aa16eaccfca47d70327f9769d338769b72af6f508e23dd3ea836627acb54a8f" Namespace="calico-apiserver" Pod="calico-apiserver-65f8f4c6db-glr2q" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-calico--apiserver--65f8f4c6db--glr2q-eth0" Dec 16 12:15:53.614073 containerd[2077]: 2025-12-16 12:15:53.547 [INFO][5097] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8aa16eaccfca47d70327f9769d338769b72af6f508e23dd3ea836627acb54a8f" HandleID="k8s-pod-network.8aa16eaccfca47d70327f9769d338769b72af6f508e23dd3ea836627acb54a8f" Workload="ci--4547.0.0--a--623de6ebc0-k8s-calico--apiserver--65f8f4c6db--glr2q-eth0" Dec 16 12:15:53.614073 containerd[2077]: 2025-12-16 12:15:53.547 [INFO][5097] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8aa16eaccfca47d70327f9769d338769b72af6f508e23dd3ea836627acb54a8f" HandleID="k8s-pod-network.8aa16eaccfca47d70327f9769d338769b72af6f508e23dd3ea836627acb54a8f" Workload="ci--4547.0.0--a--623de6ebc0-k8s-calico--apiserver--65f8f4c6db--glr2q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000331240), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.0.0-a-623de6ebc0", "pod":"calico-apiserver-65f8f4c6db-glr2q", "timestamp":"2025-12-16 12:15:53.547541008 +0000 UTC"}, Hostname:"ci-4547.0.0-a-623de6ebc0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:15:53.614073 containerd[2077]: 2025-12-16 12:15:53.547 [INFO][5097] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:15:53.614073 containerd[2077]: 2025-12-16 12:15:53.547 [INFO][5097] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:15:53.614073 containerd[2077]: 2025-12-16 12:15:53.547 [INFO][5097] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-623de6ebc0' Dec 16 12:15:53.614073 containerd[2077]: 2025-12-16 12:15:53.554 [INFO][5097] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8aa16eaccfca47d70327f9769d338769b72af6f508e23dd3ea836627acb54a8f" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:53.614073 containerd[2077]: 2025-12-16 12:15:53.557 [INFO][5097] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:53.614073 containerd[2077]: 2025-12-16 12:15:53.562 [INFO][5097] ipam/ipam.go 511: Trying affinity for 192.168.108.64/26 host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:53.614073 containerd[2077]: 2025-12-16 12:15:53.564 [INFO][5097] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.64/26 host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:53.614073 containerd[2077]: 2025-12-16 12:15:53.565 [INFO][5097] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.64/26 host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:53.614073 containerd[2077]: 2025-12-16 12:15:53.566 [INFO][5097] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.108.64/26 handle="k8s-pod-network.8aa16eaccfca47d70327f9769d338769b72af6f508e23dd3ea836627acb54a8f" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:53.614073 containerd[2077]: 2025-12-16 12:15:53.567 [INFO][5097] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8aa16eaccfca47d70327f9769d338769b72af6f508e23dd3ea836627acb54a8f Dec 16 12:15:53.614073 containerd[2077]: 2025-12-16 12:15:53.573 [INFO][5097] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.108.64/26 handle="k8s-pod-network.8aa16eaccfca47d70327f9769d338769b72af6f508e23dd3ea836627acb54a8f" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:53.614073 containerd[2077]: 2025-12-16 12:15:53.583 [INFO][5097] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.108.68/26] block=192.168.108.64/26 handle="k8s-pod-network.8aa16eaccfca47d70327f9769d338769b72af6f508e23dd3ea836627acb54a8f" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:53.614073 containerd[2077]: 2025-12-16 12:15:53.583 [INFO][5097] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.68/26] handle="k8s-pod-network.8aa16eaccfca47d70327f9769d338769b72af6f508e23dd3ea836627acb54a8f" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:53.614073 containerd[2077]: 2025-12-16 12:15:53.583 [INFO][5097] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:15:53.614073 containerd[2077]: 2025-12-16 12:15:53.583 [INFO][5097] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.108.68/26] IPv6=[] ContainerID="8aa16eaccfca47d70327f9769d338769b72af6f508e23dd3ea836627acb54a8f" HandleID="k8s-pod-network.8aa16eaccfca47d70327f9769d338769b72af6f508e23dd3ea836627acb54a8f" Workload="ci--4547.0.0--a--623de6ebc0-k8s-calico--apiserver--65f8f4c6db--glr2q-eth0" Dec 16 12:15:53.615576 containerd[2077]: 2025-12-16 12:15:53.585 [INFO][5084] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8aa16eaccfca47d70327f9769d338769b72af6f508e23dd3ea836627acb54a8f" Namespace="calico-apiserver" Pod="calico-apiserver-65f8f4c6db-glr2q" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-calico--apiserver--65f8f4c6db--glr2q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--623de6ebc0-k8s-calico--apiserver--65f8f4c6db--glr2q-eth0", GenerateName:"calico-apiserver-65f8f4c6db-", Namespace:"calico-apiserver", SelfLink:"", UID:"a081eac5-c790-4263-a08c-1af1e10fce20", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 15, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65f8f4c6db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-623de6ebc0", ContainerID:"", Pod:"calico-apiserver-65f8f4c6db-glr2q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.108.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3bc8f32cf78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:15:53.615576 containerd[2077]: 2025-12-16 12:15:53.585 [INFO][5084] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.68/32] ContainerID="8aa16eaccfca47d70327f9769d338769b72af6f508e23dd3ea836627acb54a8f" Namespace="calico-apiserver" Pod="calico-apiserver-65f8f4c6db-glr2q" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-calico--apiserver--65f8f4c6db--glr2q-eth0" Dec 16 12:15:53.615576 containerd[2077]: 2025-12-16 12:15:53.586 [INFO][5084] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3bc8f32cf78 ContainerID="8aa16eaccfca47d70327f9769d338769b72af6f508e23dd3ea836627acb54a8f" Namespace="calico-apiserver" Pod="calico-apiserver-65f8f4c6db-glr2q" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-calico--apiserver--65f8f4c6db--glr2q-eth0" Dec 16 12:15:53.615576 containerd[2077]: 2025-12-16 12:15:53.592 [INFO][5084] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8aa16eaccfca47d70327f9769d338769b72af6f508e23dd3ea836627acb54a8f" Namespace="calico-apiserver" Pod="calico-apiserver-65f8f4c6db-glr2q" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-calico--apiserver--65f8f4c6db--glr2q-eth0" Dec 16 12:15:53.615576 containerd[2077]: 2025-12-16 12:15:53.593 
[INFO][5084] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8aa16eaccfca47d70327f9769d338769b72af6f508e23dd3ea836627acb54a8f" Namespace="calico-apiserver" Pod="calico-apiserver-65f8f4c6db-glr2q" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-calico--apiserver--65f8f4c6db--glr2q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--623de6ebc0-k8s-calico--apiserver--65f8f4c6db--glr2q-eth0", GenerateName:"calico-apiserver-65f8f4c6db-", Namespace:"calico-apiserver", SelfLink:"", UID:"a081eac5-c790-4263-a08c-1af1e10fce20", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 15, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65f8f4c6db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-623de6ebc0", ContainerID:"8aa16eaccfca47d70327f9769d338769b72af6f508e23dd3ea836627acb54a8f", Pod:"calico-apiserver-65f8f4c6db-glr2q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.108.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3bc8f32cf78", MAC:"02:6f:1a:3b:f8:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:15:53.615576 containerd[2077]: 2025-12-16 12:15:53.610 [INFO][5084] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8aa16eaccfca47d70327f9769d338769b72af6f508e23dd3ea836627acb54a8f" Namespace="calico-apiserver" Pod="calico-apiserver-65f8f4c6db-glr2q" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-calico--apiserver--65f8f4c6db--glr2q-eth0" Dec 16 12:15:53.630707 kubelet[3629]: E1216 12:15:53.630629 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lft87" podUID="8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85" Dec 16 12:15:53.640000 audit[5112]: NETFILTER_CFG table=filter:128 family=2 entries=58 op=nft_register_chain pid=5112 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:15:53.640000 audit[5112]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=30584 
a0=3 a1=ffffcaa06f70 a2=0 a3=ffffb5e73fa8 items=0 ppid=4647 pid=5112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:53.640000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:15:53.663523 containerd[2077]: time="2025-12-16T12:15:53.663379049Z" level=info msg="connecting to shim 8aa16eaccfca47d70327f9769d338769b72af6f508e23dd3ea836627acb54a8f" address="unix:///run/containerd/s/8c62655406bbe1f535ed6205b98397537115cdfd75f3fa70bc26abafd2245b9c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:15:53.675730 kubelet[3629]: I1216 12:15:53.675385 3629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8x7w7" podStartSLOduration=37.674906035 podStartE2EDuration="37.674906035s" podCreationTimestamp="2025-12-16 12:15:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:15:53.674585689 +0000 UTC m=+42.275578391" watchObservedRunningTime="2025-12-16 12:15:53.674906035 +0000 UTC m=+42.275898737" Dec 16 12:15:53.697081 systemd[1]: Started cri-containerd-8aa16eaccfca47d70327f9769d338769b72af6f508e23dd3ea836627acb54a8f.scope - libcontainer container 8aa16eaccfca47d70327f9769d338769b72af6f508e23dd3ea836627acb54a8f. Dec 16 12:15:53.702000 audit[5151]: NETFILTER_CFG table=filter:129 family=2 entries=17 op=nft_register_rule pid=5151 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:53.702000 audit[5151]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdd61aaa0 a2=0 a3=1 items=0 ppid=3775 pid=5151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:53.702000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:53.707000 audit[5151]: NETFILTER_CFG table=nat:130 family=2 entries=35 op=nft_register_chain pid=5151 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:53.707000 audit[5151]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffdd61aaa0 a2=0 a3=1 items=0 ppid=3775 pid=5151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:53.707000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:53.723082 systemd-networkd[1659]: calif2351847ea2: Gained IPv6LL Dec 16 12:15:53.728000 audit: BPF prog-id=250 op=LOAD Dec 16 12:15:53.729000 audit: BPF prog-id=251 op=LOAD Dec 16 12:15:53.729000 audit[5134]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5121 pid=5134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:53.729000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861613136656163636663613437643730333237663937363964333338 Dec 16 12:15:53.729000 audit: BPF prog-id=251 op=UNLOAD Dec 16 12:15:53.729000 audit[5134]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5121 pid=5134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:53.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861613136656163636663613437643730333237663937363964333338 Dec 16 12:15:53.729000 audit: BPF prog-id=252 op=LOAD Dec 16 12:15:53.729000 audit[5134]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5121 pid=5134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:53.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861613136656163636663613437643730333237663937363964333338 Dec 16 12:15:53.729000 audit: BPF prog-id=253 op=LOAD Dec 16 12:15:53.729000 audit[5134]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5121 pid=5134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:53.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861613136656163636663613437643730333237663937363964333338 Dec 16 12:15:53.729000 audit: BPF prog-id=253 op=UNLOAD Dec 16 12:15:53.729000 audit[5134]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5121 pid=5134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:53.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861613136656163636663613437643730333237663937363964333338 Dec 16 12:15:53.729000 audit: BPF prog-id=252 op=UNLOAD Dec 16 12:15:53.729000 audit[5134]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5121 pid=5134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:53.729000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861613136656163636663613437643730333237663937363964333338 Dec 16 12:15:53.729000 audit: BPF prog-id=254 op=LOAD Dec 16 12:15:53.729000 audit[5134]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5121 pid=5134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:53.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861613136656163636663613437643730333237663937363964333338 Dec 16 12:15:53.738801 kubelet[3629]: I1216 12:15:53.738538 3629 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:15:53.758141 containerd[2077]: time="2025-12-16T12:15:53.758076087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65f8f4c6db-glr2q,Uid:a081eac5-c790-4263-a08c-1af1e10fce20,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8aa16eaccfca47d70327f9769d338769b72af6f508e23dd3ea836627acb54a8f\"" Dec 16 12:15:53.760378 containerd[2077]: time="2025-12-16T12:15:53.760307193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:15:53.850932 systemd-networkd[1659]: calicb88a408230: Gained IPv6LL Dec 16 12:15:54.062566 containerd[2077]: time="2025-12-16T12:15:54.062510554Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:15:54.066637 containerd[2077]: time="2025-12-16T12:15:54.066595423Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:15:54.066736 containerd[2077]: time="2025-12-16T12:15:54.066684863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:15:54.066949 kubelet[3629]: E1216 12:15:54.066902 3629 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:15:54.067003 kubelet[3629]: E1216 12:15:54.066954 3629 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:15:54.067051 kubelet[3629]: E1216 12:15:54.067034 3629 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-65f8f4c6db-glr2q_calico-apiserver(a081eac5-c790-4263-a08c-1af1e10fce20): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" 
Dec 16 12:15:54.067170 kubelet[3629]: E1216 12:15:54.067064 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f8f4c6db-glr2q" podUID="a081eac5-c790-4263-a08c-1af1e10fce20" Dec 16 12:15:54.638643 kubelet[3629]: E1216 12:15:54.638212 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f8f4c6db-glr2q" podUID="a081eac5-c790-4263-a08c-1af1e10fce20" Dec 16 12:15:54.640017 kubelet[3629]: E1216 12:15:54.639989 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lft87" podUID="8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85" Dec 16 12:15:54.725000 audit[5217]: NETFILTER_CFG table=filter:131 family=2 entries=14 op=nft_register_rule pid=5217 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:54.725000 audit[5217]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffee73da70 a2=0 a3=1 items=0 ppid=3775 pid=5217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:54.725000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:54.730000 audit[5217]: NETFILTER_CFG table=nat:132 family=2 entries=20 op=nft_register_rule pid=5217 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:54.730000 audit[5217]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffee73da70 a2=0 a3=1 items=0 ppid=3775 pid=5217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:54.730000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:55.130929 systemd-networkd[1659]: cali3bc8f32cf78: Gained IPv6LL Dec 16 12:15:55.482356 containerd[2077]: time="2025-12-16T12:15:55.482241700Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-d7m9p,Uid:1578eaf6-7b31-404e-823c-f6b50cad689e,Namespace:kube-system,Attempt:0,}" Dec 16 12:15:55.486664 containerd[2077]: time="2025-12-16T12:15:55.486603159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65f8f4c6db-nsvdz,Uid:934998e2-1bd4-4baf-a9e2-cc5a0c414cea,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:15:55.494250 containerd[2077]: time="2025-12-16T12:15:55.494181920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-xnsjr,Uid:42e08362-84ba-4be1-b9a5-3a3391796c9d,Namespace:calico-system,Attempt:0,}" Dec 16 12:15:55.523236 containerd[2077]: time="2025-12-16T12:15:55.523181532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6574577cd4-cw4js,Uid:4517fd6f-78fe-47ae-9d6d-f6ee6d3ebba7,Namespace:calico-system,Attempt:0,}" Dec 16 12:15:55.640855 kubelet[3629]: E1216 12:15:55.640742 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f8f4c6db-glr2q" podUID="a081eac5-c790-4263-a08c-1af1e10fce20" Dec 16 12:15:55.705188 systemd-networkd[1659]: cali95b81e4a56a: Link UP Dec 16 12:15:55.706367 systemd-networkd[1659]: cali95b81e4a56a: Gained carrier Dec 16 12:15:55.727868 containerd[2077]: 2025-12-16 12:15:55.582 [INFO][5219] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--623de6ebc0-k8s-coredns--66bc5c9577--d7m9p-eth0 coredns-66bc5c9577- kube-system 1578eaf6-7b31-404e-823c-f6b50cad689e 816 0 2025-12-16 12:15:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547.0.0-a-623de6ebc0 coredns-66bc5c9577-d7m9p eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali95b81e4a56a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="f8ecd652d5d06ecfc0ab2aff6c76a66e8515387f09c5ed5c7c89b947812d0dc7" Namespace="kube-system" Pod="coredns-66bc5c9577-d7m9p" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-coredns--66bc5c9577--d7m9p-" Dec 16 12:15:55.727868 containerd[2077]: 2025-12-16 12:15:55.582 [INFO][5219] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f8ecd652d5d06ecfc0ab2aff6c76a66e8515387f09c5ed5c7c89b947812d0dc7" Namespace="kube-system" Pod="coredns-66bc5c9577-d7m9p" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-coredns--66bc5c9577--d7m9p-eth0" Dec 16 12:15:55.727868 containerd[2077]: 2025-12-16 12:15:55.623 [INFO][5265] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f8ecd652d5d06ecfc0ab2aff6c76a66e8515387f09c5ed5c7c89b947812d0dc7" HandleID="k8s-pod-network.f8ecd652d5d06ecfc0ab2aff6c76a66e8515387f09c5ed5c7c89b947812d0dc7" Workload="ci--4547.0.0--a--623de6ebc0-k8s-coredns--66bc5c9577--d7m9p-eth0" Dec 16 12:15:55.727868 containerd[2077]: 2025-12-16 12:15:55.623 [INFO][5265] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="f8ecd652d5d06ecfc0ab2aff6c76a66e8515387f09c5ed5c7c89b947812d0dc7" HandleID="k8s-pod-network.f8ecd652d5d06ecfc0ab2aff6c76a66e8515387f09c5ed5c7c89b947812d0dc7" Workload="ci--4547.0.0--a--623de6ebc0-k8s-coredns--66bc5c9577--d7m9p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3600), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547.0.0-a-623de6ebc0", "pod":"coredns-66bc5c9577-d7m9p", "timestamp":"2025-12-16 12:15:55.623004293 +0000 UTC"}, Hostname:"ci-4547.0.0-a-623de6ebc0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:15:55.727868 containerd[2077]: 2025-12-16 12:15:55.623 [INFO][5265] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:15:55.727868 containerd[2077]: 2025-12-16 12:15:55.623 [INFO][5265] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:15:55.727868 containerd[2077]: 2025-12-16 12:15:55.623 [INFO][5265] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-623de6ebc0' Dec 16 12:15:55.727868 containerd[2077]: 2025-12-16 12:15:55.631 [INFO][5265] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f8ecd652d5d06ecfc0ab2aff6c76a66e8515387f09c5ed5c7c89b947812d0dc7" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:55.727868 containerd[2077]: 2025-12-16 12:15:55.637 [INFO][5265] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:55.727868 containerd[2077]: 2025-12-16 12:15:55.655 [INFO][5265] ipam/ipam.go 511: Trying affinity for 192.168.108.64/26 host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:55.727868 containerd[2077]: 2025-12-16 12:15:55.659 [INFO][5265] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.64/26 host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:55.727868 containerd[2077]: 2025-12-16 12:15:55.668 [INFO][5265] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.64/26 host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:55.727868 containerd[2077]: 2025-12-16 12:15:55.668 [INFO][5265] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.108.64/26 handle="k8s-pod-network.f8ecd652d5d06ecfc0ab2aff6c76a66e8515387f09c5ed5c7c89b947812d0dc7" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:55.727868 containerd[2077]: 2025-12-16 12:15:55.672 [INFO][5265] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f8ecd652d5d06ecfc0ab2aff6c76a66e8515387f09c5ed5c7c89b947812d0dc7 Dec 16 12:15:55.727868 containerd[2077]: 2025-12-16 12:15:55.683 [INFO][5265] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.108.64/26 handle="k8s-pod-network.f8ecd652d5d06ecfc0ab2aff6c76a66e8515387f09c5ed5c7c89b947812d0dc7" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:55.727868 containerd[2077]: 2025-12-16 12:15:55.693 [INFO][5265] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.108.69/26] block=192.168.108.64/26 handle="k8s-pod-network.f8ecd652d5d06ecfc0ab2aff6c76a66e8515387f09c5ed5c7c89b947812d0dc7" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:55.727868 containerd[2077]: 2025-12-16 12:15:55.693 [INFO][5265] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.69/26] handle="k8s-pod-network.f8ecd652d5d06ecfc0ab2aff6c76a66e8515387f09c5ed5c7c89b947812d0dc7" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:55.727868 containerd[2077]: 2025-12-16 12:15:55.693 [INFO][5265] 
ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:15:55.727868 containerd[2077]: 2025-12-16 12:15:55.693 [INFO][5265] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.108.69/26] IPv6=[] ContainerID="f8ecd652d5d06ecfc0ab2aff6c76a66e8515387f09c5ed5c7c89b947812d0dc7" HandleID="k8s-pod-network.f8ecd652d5d06ecfc0ab2aff6c76a66e8515387f09c5ed5c7c89b947812d0dc7" Workload="ci--4547.0.0--a--623de6ebc0-k8s-coredns--66bc5c9577--d7m9p-eth0" Dec 16 12:15:55.728333 containerd[2077]: 2025-12-16 12:15:55.698 [INFO][5219] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f8ecd652d5d06ecfc0ab2aff6c76a66e8515387f09c5ed5c7c89b947812d0dc7" Namespace="kube-system" Pod="coredns-66bc5c9577-d7m9p" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-coredns--66bc5c9577--d7m9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--623de6ebc0-k8s-coredns--66bc5c9577--d7m9p-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1578eaf6-7b31-404e-823c-f6b50cad689e", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 15, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-623de6ebc0", ContainerID:"", Pod:"coredns-66bc5c9577-d7m9p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.108.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95b81e4a56a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:15:55.728333 containerd[2077]: 2025-12-16 12:15:55.699 [INFO][5219] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.69/32] ContainerID="f8ecd652d5d06ecfc0ab2aff6c76a66e8515387f09c5ed5c7c89b947812d0dc7" Namespace="kube-system" Pod="coredns-66bc5c9577-d7m9p" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-coredns--66bc5c9577--d7m9p-eth0" Dec 16 12:15:55.728333 containerd[2077]: 2025-12-16 12:15:55.699 [INFO][5219] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95b81e4a56a ContainerID="f8ecd652d5d06ecfc0ab2aff6c76a66e8515387f09c5ed5c7c89b947812d0dc7" Namespace="kube-system" 
Pod="coredns-66bc5c9577-d7m9p" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-coredns--66bc5c9577--d7m9p-eth0" Dec 16 12:15:55.728333 containerd[2077]: 2025-12-16 12:15:55.706 [INFO][5219] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f8ecd652d5d06ecfc0ab2aff6c76a66e8515387f09c5ed5c7c89b947812d0dc7" Namespace="kube-system" Pod="coredns-66bc5c9577-d7m9p" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-coredns--66bc5c9577--d7m9p-eth0" Dec 16 12:15:55.728333 containerd[2077]: 2025-12-16 12:15:55.710 [INFO][5219] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f8ecd652d5d06ecfc0ab2aff6c76a66e8515387f09c5ed5c7c89b947812d0dc7" Namespace="kube-system" Pod="coredns-66bc5c9577-d7m9p" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-coredns--66bc5c9577--d7m9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--623de6ebc0-k8s-coredns--66bc5c9577--d7m9p-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1578eaf6-7b31-404e-823c-f6b50cad689e", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 15, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-623de6ebc0", ContainerID:"f8ecd652d5d06ecfc0ab2aff6c76a66e8515387f09c5ed5c7c89b947812d0dc7", Pod:"coredns-66bc5c9577-d7m9p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.108.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95b81e4a56a", MAC:"36:19:be:3d:ed:3a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:15:55.728448 containerd[2077]: 2025-12-16 12:15:55.725 [INFO][5219] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f8ecd652d5d06ecfc0ab2aff6c76a66e8515387f09c5ed5c7c89b947812d0dc7" Namespace="kube-system" Pod="coredns-66bc5c9577-d7m9p" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-coredns--66bc5c9577--d7m9p-eth0" Dec 16 12:15:55.758000 audit[5308]: NETFILTER_CFG table=filter:133 family=2 entries=50 op=nft_register_chain pid=5308 subj=system_u:system_r:kernel_t:s0 
comm="iptables-nft-re" Dec 16 12:15:55.758000 audit[5308]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24384 a0=3 a1=ffffeabede40 a2=0 a3=ffffb23e9fa8 items=0 ppid=4647 pid=5308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:55.758000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:15:55.770807 systemd-networkd[1659]: califd5fda51e6f: Link UP Dec 16 12:15:55.772224 systemd-networkd[1659]: califd5fda51e6f: Gained carrier Dec 16 12:15:55.789297 containerd[2077]: 2025-12-16 12:15:55.615 [INFO][5230] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--623de6ebc0-k8s-calico--apiserver--65f8f4c6db--nsvdz-eth0 calico-apiserver-65f8f4c6db- calico-apiserver 934998e2-1bd4-4baf-a9e2-cc5a0c414cea 817 0 2025-12-16 12:15:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65f8f4c6db projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.0.0-a-623de6ebc0 calico-apiserver-65f8f4c6db-nsvdz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califd5fda51e6f [] [] }} ContainerID="459bd178dd7c1794ae99cd460385b642823078298506481405d4595b85c36292" Namespace="calico-apiserver" Pod="calico-apiserver-65f8f4c6db-nsvdz" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-calico--apiserver--65f8f4c6db--nsvdz-" Dec 16 12:15:55.789297 containerd[2077]: 2025-12-16 12:15:55.616 [INFO][5230] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="459bd178dd7c1794ae99cd460385b642823078298506481405d4595b85c36292" Namespace="calico-apiserver" Pod="calico-apiserver-65f8f4c6db-nsvdz" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-calico--apiserver--65f8f4c6db--nsvdz-eth0" Dec 16 12:15:55.789297 containerd[2077]: 2025-12-16 12:15:55.662 [INFO][5275] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="459bd178dd7c1794ae99cd460385b642823078298506481405d4595b85c36292" HandleID="k8s-pod-network.459bd178dd7c1794ae99cd460385b642823078298506481405d4595b85c36292" Workload="ci--4547.0.0--a--623de6ebc0-k8s-calico--apiserver--65f8f4c6db--nsvdz-eth0" Dec 16 12:15:55.789297 containerd[2077]: 2025-12-16 12:15:55.662 [INFO][5275] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="459bd178dd7c1794ae99cd460385b642823078298506481405d4595b85c36292" HandleID="k8s-pod-network.459bd178dd7c1794ae99cd460385b642823078298506481405d4595b85c36292" Workload="ci--4547.0.0--a--623de6ebc0-k8s-calico--apiserver--65f8f4c6db--nsvdz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d940), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.0.0-a-623de6ebc0", "pod":"calico-apiserver-65f8f4c6db-nsvdz", "timestamp":"2025-12-16 12:15:55.662189871 +0000 UTC"}, Hostname:"ci-4547.0.0-a-623de6ebc0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:15:55.789297 containerd[2077]: 2025-12-16 12:15:55.662 [INFO][5275] ipam/ipam_plugin.go 377: 
About to acquire host-wide IPAM lock. Dec 16 12:15:55.789297 containerd[2077]: 2025-12-16 12:15:55.693 [INFO][5275] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:15:55.789297 containerd[2077]: 2025-12-16 12:15:55.694 [INFO][5275] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-623de6ebc0' Dec 16 12:15:55.789297 containerd[2077]: 2025-12-16 12:15:55.734 [INFO][5275] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.459bd178dd7c1794ae99cd460385b642823078298506481405d4595b85c36292" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:55.789297 containerd[2077]: 2025-12-16 12:15:55.740 [INFO][5275] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:55.789297 containerd[2077]: 2025-12-16 12:15:55.743 [INFO][5275] ipam/ipam.go 511: Trying affinity for 192.168.108.64/26 host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:55.789297 containerd[2077]: 2025-12-16 12:15:55.745 [INFO][5275] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.64/26 host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:55.789297 containerd[2077]: 2025-12-16 12:15:55.748 [INFO][5275] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.64/26 host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:55.789297 containerd[2077]: 2025-12-16 12:15:55.749 [INFO][5275] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.108.64/26 handle="k8s-pod-network.459bd178dd7c1794ae99cd460385b642823078298506481405d4595b85c36292" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:55.789297 containerd[2077]: 2025-12-16 12:15:55.751 [INFO][5275] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.459bd178dd7c1794ae99cd460385b642823078298506481405d4595b85c36292 Dec 16 12:15:55.789297 containerd[2077]: 2025-12-16 12:15:55.756 [INFO][5275] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.108.64/26 handle="k8s-pod-network.459bd178dd7c1794ae99cd460385b642823078298506481405d4595b85c36292" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:55.789297 containerd[2077]: 2025-12-16 12:15:55.764 [INFO][5275] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.108.70/26] block=192.168.108.64/26 handle="k8s-pod-network.459bd178dd7c1794ae99cd460385b642823078298506481405d4595b85c36292" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:55.789297 containerd[2077]: 2025-12-16 12:15:55.764 [INFO][5275] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.70/26] handle="k8s-pod-network.459bd178dd7c1794ae99cd460385b642823078298506481405d4595b85c36292" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:55.789297 containerd[2077]: 2025-12-16 12:15:55.765 [INFO][5275] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:15:55.789297 containerd[2077]: 2025-12-16 12:15:55.765 [INFO][5275] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.108.70/26] IPv6=[] ContainerID="459bd178dd7c1794ae99cd460385b642823078298506481405d4595b85c36292" HandleID="k8s-pod-network.459bd178dd7c1794ae99cd460385b642823078298506481405d4595b85c36292" Workload="ci--4547.0.0--a--623de6ebc0-k8s-calico--apiserver--65f8f4c6db--nsvdz-eth0" Dec 16 12:15:55.791074 containerd[2077]: 2025-12-16 12:15:55.767 [INFO][5230] cni-plugin/k8s.go 418: Populated endpoint ContainerID="459bd178dd7c1794ae99cd460385b642823078298506481405d4595b85c36292" Namespace="calico-apiserver" Pod="calico-apiserver-65f8f4c6db-nsvdz" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-calico--apiserver--65f8f4c6db--nsvdz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--623de6ebc0-k8s-calico--apiserver--65f8f4c6db--nsvdz-eth0", GenerateName:"calico-apiserver-65f8f4c6db-", Namespace:"calico-apiserver", SelfLink:"", UID:"934998e2-1bd4-4baf-a9e2-cc5a0c414cea", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 15, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65f8f4c6db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-623de6ebc0", ContainerID:"", Pod:"calico-apiserver-65f8f4c6db-nsvdz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.108.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califd5fda51e6f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:15:55.791074 containerd[2077]: 2025-12-16 12:15:55.767 [INFO][5230] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.70/32] ContainerID="459bd178dd7c1794ae99cd460385b642823078298506481405d4595b85c36292" Namespace="calico-apiserver" Pod="calico-apiserver-65f8f4c6db-nsvdz" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-calico--apiserver--65f8f4c6db--nsvdz-eth0" Dec 16 12:15:55.791074 containerd[2077]: 2025-12-16 12:15:55.767 [INFO][5230] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd5fda51e6f ContainerID="459bd178dd7c1794ae99cd460385b642823078298506481405d4595b85c36292" Namespace="calico-apiserver" Pod="calico-apiserver-65f8f4c6db-nsvdz" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-calico--apiserver--65f8f4c6db--nsvdz-eth0" Dec 16 12:15:55.791074 containerd[2077]: 2025-12-16 12:15:55.773 [INFO][5230] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="459bd178dd7c1794ae99cd460385b642823078298506481405d4595b85c36292" Namespace="calico-apiserver" Pod="calico-apiserver-65f8f4c6db-nsvdz" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-calico--apiserver--65f8f4c6db--nsvdz-eth0" Dec 16 12:15:55.791074 containerd[2077]: 2025-12-16 12:15:55.773 
[INFO][5230] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="459bd178dd7c1794ae99cd460385b642823078298506481405d4595b85c36292" Namespace="calico-apiserver" Pod="calico-apiserver-65f8f4c6db-nsvdz" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-calico--apiserver--65f8f4c6db--nsvdz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--623de6ebc0-k8s-calico--apiserver--65f8f4c6db--nsvdz-eth0", GenerateName:"calico-apiserver-65f8f4c6db-", Namespace:"calico-apiserver", SelfLink:"", UID:"934998e2-1bd4-4baf-a9e2-cc5a0c414cea", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 15, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65f8f4c6db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-623de6ebc0", ContainerID:"459bd178dd7c1794ae99cd460385b642823078298506481405d4595b85c36292", Pod:"calico-apiserver-65f8f4c6db-nsvdz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.108.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califd5fda51e6f", MAC:"ba:1b:cf:2c:ed:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:15:55.791074 containerd[2077]: 2025-12-16 12:15:55.787 [INFO][5230] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="459bd178dd7c1794ae99cd460385b642823078298506481405d4595b85c36292" Namespace="calico-apiserver" Pod="calico-apiserver-65f8f4c6db-nsvdz" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-calico--apiserver--65f8f4c6db--nsvdz-eth0" Dec 16 12:15:55.803000 audit[5319]: NETFILTER_CFG table=filter:134 family=2 entries=55 op=nft_register_chain pid=5319 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:15:55.803000 audit[5319]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28288 a0=3 a1=ffffee42fc20 a2=0 a3=ffffa5364fa8 items=0 ppid=4647 pid=5319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:55.803000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:15:55.832933 containerd[2077]: time="2025-12-16T12:15:55.832888118Z" level=info msg="connecting to shim f8ecd652d5d06ecfc0ab2aff6c76a66e8515387f09c5ed5c7c89b947812d0dc7" address="unix:///run/containerd/s/95997f902c89cb7316f5dbaa661009b07574c38923f6684d4a59c1e42450f7d9" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:15:55.864036 systemd[1]: Started cri-containerd-f8ecd652d5d06ecfc0ab2aff6c76a66e8515387f09c5ed5c7c89b947812d0dc7.scope - libcontainer container 
f8ecd652d5d06ecfc0ab2aff6c76a66e8515387f09c5ed5c7c89b947812d0dc7. Dec 16 12:15:55.867906 containerd[2077]: time="2025-12-16T12:15:55.867862239Z" level=info msg="connecting to shim 459bd178dd7c1794ae99cd460385b642823078298506481405d4595b85c36292" address="unix:///run/containerd/s/c3db9aae274de014f958de5c36d578e33ffdb403d9cb3bfbba42c3edb6e496ce" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:15:55.885000 audit: BPF prog-id=255 op=LOAD Dec 16 12:15:55.888000 audit: BPF prog-id=256 op=LOAD Dec 16 12:15:55.888000 audit[5340]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5329 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:55.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638656364363532643564303665636663306162326166663663373661 Dec 16 12:15:55.888000 audit: BPF prog-id=256 op=UNLOAD Dec 16 12:15:55.888000 audit[5340]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5329 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:55.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638656364363532643564303665636663306162326166663663373661 Dec 16 12:15:55.888000 audit: BPF prog-id=257 op=LOAD Dec 16 12:15:55.888000 audit[5340]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5329 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:55.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638656364363532643564303665636663306162326166663663373661 Dec 16 12:15:55.888000 audit: BPF prog-id=258 op=LOAD Dec 16 12:15:55.888000 audit[5340]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5329 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:55.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638656364363532643564303665636663306162326166663663373661 Dec 16 12:15:55.888000 audit: BPF prog-id=258 op=UNLOAD Dec 16 12:15:55.888000 audit[5340]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5329 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:15:55.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638656364363532643564303665636663306162326166663663373661 Dec 16 12:15:55.888000 audit: BPF prog-id=257 op=UNLOAD Dec 16 12:15:55.888000 audit[5340]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5329 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:55.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638656364363532643564303665636663306162326166663663373661 Dec 16 12:15:55.888000 audit: BPF prog-id=259 op=LOAD Dec 16 12:15:55.888000 audit[5340]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5329 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:55.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638656364363532643564303665636663306162326166663663373661 Dec 16 12:15:55.896591 systemd-networkd[1659]: calia72dd2dd753: Link UP Dec 16 12:15:55.897016 systemd-networkd[1659]: calia72dd2dd753: Gained carrier Dec 16 12:15:55.918040 systemd[1]: Started cri-containerd-459bd178dd7c1794ae99cd460385b642823078298506481405d4595b85c36292.scope - libcontainer container 459bd178dd7c1794ae99cd460385b642823078298506481405d4595b85c36292. 
Dec 16 12:15:55.922770 containerd[2077]: 2025-12-16 12:15:55.627 [INFO][5242] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--623de6ebc0-k8s-goldmane--7c778bb748--xnsjr-eth0 goldmane-7c778bb748- calico-system 42e08362-84ba-4be1-b9a5-3a3391796c9d 812 0 2025-12-16 12:15:30 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4547.0.0-a-623de6ebc0 goldmane-7c778bb748-xnsjr eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia72dd2dd753 [] [] }} ContainerID="50c8fe29d4ff27473c23380b1761b72621782ce9832d3c4acde9eb3f5999b20e" Namespace="calico-system" Pod="goldmane-7c778bb748-xnsjr" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-goldmane--7c778bb748--xnsjr-" Dec 16 12:15:55.922770 containerd[2077]: 2025-12-16 12:15:55.627 [INFO][5242] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="50c8fe29d4ff27473c23380b1761b72621782ce9832d3c4acde9eb3f5999b20e" Namespace="calico-system" Pod="goldmane-7c778bb748-xnsjr" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-goldmane--7c778bb748--xnsjr-eth0" Dec 16 12:15:55.922770 containerd[2077]: 2025-12-16 12:15:55.691 [INFO][5281] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="50c8fe29d4ff27473c23380b1761b72621782ce9832d3c4acde9eb3f5999b20e" HandleID="k8s-pod-network.50c8fe29d4ff27473c23380b1761b72621782ce9832d3c4acde9eb3f5999b20e" Workload="ci--4547.0.0--a--623de6ebc0-k8s-goldmane--7c778bb748--xnsjr-eth0" Dec 16 12:15:55.922770 containerd[2077]: 2025-12-16 12:15:55.691 [INFO][5281] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="50c8fe29d4ff27473c23380b1761b72621782ce9832d3c4acde9eb3f5999b20e" HandleID="k8s-pod-network.50c8fe29d4ff27473c23380b1761b72621782ce9832d3c4acde9eb3f5999b20e" Workload="ci--4547.0.0--a--623de6ebc0-k8s-goldmane--7c778bb748--xnsjr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3910), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-a-623de6ebc0", "pod":"goldmane-7c778bb748-xnsjr", "timestamp":"2025-12-16 12:15:55.69152521 +0000 UTC"}, Hostname:"ci-4547.0.0-a-623de6ebc0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:15:55.922770 containerd[2077]: 2025-12-16 12:15:55.691 [INFO][5281] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:15:55.922770 containerd[2077]: 2025-12-16 12:15:55.765 [INFO][5281] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:15:55.922770 containerd[2077]: 2025-12-16 12:15:55.765 [INFO][5281] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-623de6ebc0' Dec 16 12:15:55.922770 containerd[2077]: 2025-12-16 12:15:55.833 [INFO][5281] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.50c8fe29d4ff27473c23380b1761b72621782ce9832d3c4acde9eb3f5999b20e" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:55.922770 containerd[2077]: 2025-12-16 12:15:55.841 [INFO][5281] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:55.922770 containerd[2077]: 2025-12-16 12:15:55.846 [INFO][5281] ipam/ipam.go 511: Trying affinity for 192.168.108.64/26 host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:55.922770 containerd[2077]: 2025-12-16 12:15:55.848 [INFO][5281] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.64/26 host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:55.922770 containerd[2077]: 2025-12-16 12:15:55.852 [INFO][5281] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.64/26 host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:55.922770 containerd[2077]: 2025-12-16 12:15:55.853 [INFO][5281] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.108.64/26 handle="k8s-pod-network.50c8fe29d4ff27473c23380b1761b72621782ce9832d3c4acde9eb3f5999b20e" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:55.922770 containerd[2077]: 2025-12-16 12:15:55.856 [INFO][5281] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.50c8fe29d4ff27473c23380b1761b72621782ce9832d3c4acde9eb3f5999b20e Dec 16 12:15:55.922770 containerd[2077]: 2025-12-16 12:15:55.866 [INFO][5281] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.108.64/26 handle="k8s-pod-network.50c8fe29d4ff27473c23380b1761b72621782ce9832d3c4acde9eb3f5999b20e" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:55.922770 containerd[2077]: 2025-12-16 12:15:55.882 [INFO][5281] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.108.71/26] block=192.168.108.64/26 handle="k8s-pod-network.50c8fe29d4ff27473c23380b1761b72621782ce9832d3c4acde9eb3f5999b20e" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:55.922770 containerd[2077]: 2025-12-16 12:15:55.882 [INFO][5281] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.71/26] handle="k8s-pod-network.50c8fe29d4ff27473c23380b1761b72621782ce9832d3c4acde9eb3f5999b20e" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:55.922770 containerd[2077]: 2025-12-16 12:15:55.882 [INFO][5281] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:15:55.922770 containerd[2077]: 2025-12-16 12:15:55.882 [INFO][5281] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.108.71/26] IPv6=[] ContainerID="50c8fe29d4ff27473c23380b1761b72621782ce9832d3c4acde9eb3f5999b20e" HandleID="k8s-pod-network.50c8fe29d4ff27473c23380b1761b72621782ce9832d3c4acde9eb3f5999b20e" Workload="ci--4547.0.0--a--623de6ebc0-k8s-goldmane--7c778bb748--xnsjr-eth0" Dec 16 12:15:55.923174 containerd[2077]: 2025-12-16 12:15:55.891 [INFO][5242] cni-plugin/k8s.go 418: Populated endpoint ContainerID="50c8fe29d4ff27473c23380b1761b72621782ce9832d3c4acde9eb3f5999b20e" Namespace="calico-system" Pod="goldmane-7c778bb748-xnsjr" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-goldmane--7c778bb748--xnsjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--623de6ebc0-k8s-goldmane--7c778bb748--xnsjr-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"42e08362-84ba-4be1-b9a5-3a3391796c9d", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 15, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-623de6ebc0", ContainerID:"", Pod:"goldmane-7c778bb748-xnsjr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.108.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia72dd2dd753", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:15:55.923174 containerd[2077]: 2025-12-16 12:15:55.891 [INFO][5242] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.71/32] ContainerID="50c8fe29d4ff27473c23380b1761b72621782ce9832d3c4acde9eb3f5999b20e" Namespace="calico-system" Pod="goldmane-7c778bb748-xnsjr" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-goldmane--7c778bb748--xnsjr-eth0" Dec 16 12:15:55.923174 containerd[2077]: 2025-12-16 12:15:55.891 [INFO][5242] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia72dd2dd753 ContainerID="50c8fe29d4ff27473c23380b1761b72621782ce9832d3c4acde9eb3f5999b20e" Namespace="calico-system" Pod="goldmane-7c778bb748-xnsjr" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-goldmane--7c778bb748--xnsjr-eth0" Dec 16 12:15:55.923174 containerd[2077]: 2025-12-16 12:15:55.897 [INFO][5242] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="50c8fe29d4ff27473c23380b1761b72621782ce9832d3c4acde9eb3f5999b20e" Namespace="calico-system" Pod="goldmane-7c778bb748-xnsjr" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-goldmane--7c778bb748--xnsjr-eth0" Dec 16 12:15:55.923174 containerd[2077]: 2025-12-16 12:15:55.898 [INFO][5242] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="50c8fe29d4ff27473c23380b1761b72621782ce9832d3c4acde9eb3f5999b20e" 
Namespace="calico-system" Pod="goldmane-7c778bb748-xnsjr" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-goldmane--7c778bb748--xnsjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--623de6ebc0-k8s-goldmane--7c778bb748--xnsjr-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"42e08362-84ba-4be1-b9a5-3a3391796c9d", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 15, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-623de6ebc0", ContainerID:"50c8fe29d4ff27473c23380b1761b72621782ce9832d3c4acde9eb3f5999b20e", Pod:"goldmane-7c778bb748-xnsjr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.108.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia72dd2dd753", MAC:"4a:41:c8:45:51:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:15:55.923174 containerd[2077]: 2025-12-16 12:15:55.917 [INFO][5242] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="50c8fe29d4ff27473c23380b1761b72621782ce9832d3c4acde9eb3f5999b20e" Namespace="calico-system" Pod="goldmane-7c778bb748-xnsjr" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-goldmane--7c778bb748--xnsjr-eth0" Dec 16 12:15:55.941000 audit: BPF prog-id=260 op=LOAD Dec 16 12:15:55.942000 audit: BPF prog-id=261 op=LOAD Dec 16 12:15:55.942000 audit[5378]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=5361 pid=5378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:55.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435396264313738646437633137393461653939636434363033383562 Dec 16 12:15:55.943000 audit: BPF prog-id=261 op=UNLOAD Dec 16 12:15:55.943000 audit[5378]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5361 pid=5378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:55.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435396264313738646437633137393461653939636434363033383562 Dec 16 12:15:55.943000 audit: BPF prog-id=262 op=LOAD Dec 16 12:15:55.943000 audit[5378]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=5361 pid=5378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:55.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435396264313738646437633137393461653939636434363033383562 Dec 16 12:15:55.943000 audit: BPF prog-id=263 op=LOAD Dec 16 12:15:55.943000 audit[5378]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=5361 pid=5378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:55.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435396264313738646437633137393461653939636434363033383562 Dec 16 12:15:55.944000 audit: BPF prog-id=263 op=UNLOAD Dec 16 12:15:55.944000 audit[5378]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5361 pid=5378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:55.944000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435396264313738646437633137393461653939636434363033383562 Dec 16 12:15:55.944000 audit: BPF prog-id=262 op=UNLOAD Dec 16 12:15:55.944000 audit[5378]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5361 pid=5378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:55.944000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435396264313738646437633137393461653939636434363033383562 Dec 16 12:15:55.944000 audit: BPF prog-id=264 op=LOAD Dec 16 12:15:55.944000 audit[5378]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=5361 pid=5378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:55.944000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435396264313738646437633137393461653939636434363033383562 Dec 16 12:15:55.960436 containerd[2077]: time="2025-12-16T12:15:55.960221775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-d7m9p,Uid:1578eaf6-7b31-404e-823c-f6b50cad689e,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"f8ecd652d5d06ecfc0ab2aff6c76a66e8515387f09c5ed5c7c89b947812d0dc7\"" Dec 16 12:15:55.971363 containerd[2077]: time="2025-12-16T12:15:55.971293646Z" level=info msg="CreateContainer within sandbox \"f8ecd652d5d06ecfc0ab2aff6c76a66e8515387f09c5ed5c7c89b947812d0dc7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:15:55.974000 audit[5416]: NETFILTER_CFG table=filter:135 family=2 entries=56 op=nft_register_chain pid=5416 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:15:55.974000 audit[5416]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28712 a0=3 a1=ffffcdc25740 a2=0 a3=ffffa5aa6fa8 items=0 ppid=4647 pid=5416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:55.974000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:15:55.984107 containerd[2077]: time="2025-12-16T12:15:55.983880609Z" level=info msg="connecting to shim 50c8fe29d4ff27473c23380b1761b72621782ce9832d3c4acde9eb3f5999b20e" address="unix:///run/containerd/s/f31264f3f069ece6d2e335b797d1dd9a0f650a5a70666088441271746af40c25" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:15:55.995403 containerd[2077]: time="2025-12-16T12:15:55.995367314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65f8f4c6db-nsvdz,Uid:934998e2-1bd4-4baf-a9e2-cc5a0c414cea,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"459bd178dd7c1794ae99cd460385b642823078298506481405d4595b85c36292\"" Dec 16 12:15:55.998748 containerd[2077]: time="2025-12-16T12:15:55.998715005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:15:56.001016 systemd-networkd[1659]: cali0e9cb7af7fa: Link UP Dec 16 12:15:56.001685 systemd-networkd[1659]: cali0e9cb7af7fa: Gained carrier Dec 16 12:15:56.012895 containerd[2077]: time="2025-12-16T12:15:56.012255338Z" level=info msg="Container 0cd69dacefbfac0b8984ddc7988ef8df3c7cb45612c273a1794bc7c4c2628378: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:15:56.031947 containerd[2077]: time="2025-12-16T12:15:56.031909642Z" level=info msg="CreateContainer within sandbox \"f8ecd652d5d06ecfc0ab2aff6c76a66e8515387f09c5ed5c7c89b947812d0dc7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0cd69dacefbfac0b8984ddc7988ef8df3c7cb45612c273a1794bc7c4c2628378\"" Dec 16 12:15:56.032445 containerd[2077]: 2025-12-16 12:15:55.675 [INFO][5254] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--623de6ebc0-k8s-calico--kube--controllers--6574577cd4--cw4js-eth0 calico-kube-controllers-6574577cd4- calico-system 4517fd6f-78fe-47ae-9d6d-f6ee6d3ebba7 813 0 2025-12-16 12:15:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6574577cd4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4547.0.0-a-623de6ebc0 calico-kube-controllers-6574577cd4-cw4js eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0e9cb7af7fa [] [] }} ContainerID="539adacc9b390509b9729ac28c6d86bc99f0c983b5c1a2d6ab37ac22f95aa53a" Namespace="calico-system" 
Pod="calico-kube-controllers-6574577cd4-cw4js" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-calico--kube--controllers--6574577cd4--cw4js-" Dec 16 12:15:56.032445 containerd[2077]: 2025-12-16 12:15:55.677 [INFO][5254] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="539adacc9b390509b9729ac28c6d86bc99f0c983b5c1a2d6ab37ac22f95aa53a" Namespace="calico-system" Pod="calico-kube-controllers-6574577cd4-cw4js" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-calico--kube--controllers--6574577cd4--cw4js-eth0" Dec 16 12:15:56.032445 containerd[2077]: 2025-12-16 12:15:55.717 [INFO][5291] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="539adacc9b390509b9729ac28c6d86bc99f0c983b5c1a2d6ab37ac22f95aa53a" HandleID="k8s-pod-network.539adacc9b390509b9729ac28c6d86bc99f0c983b5c1a2d6ab37ac22f95aa53a" Workload="ci--4547.0.0--a--623de6ebc0-k8s-calico--kube--controllers--6574577cd4--cw4js-eth0" Dec 16 12:15:56.032445 containerd[2077]: 2025-12-16 12:15:55.720 [INFO][5291] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="539adacc9b390509b9729ac28c6d86bc99f0c983b5c1a2d6ab37ac22f95aa53a" HandleID="k8s-pod-network.539adacc9b390509b9729ac28c6d86bc99f0c983b5c1a2d6ab37ac22f95aa53a" Workload="ci--4547.0.0--a--623de6ebc0-k8s-calico--kube--controllers--6574577cd4--cw4js-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-a-623de6ebc0", "pod":"calico-kube-controllers-6574577cd4-cw4js", "timestamp":"2025-12-16 12:15:55.717403123 +0000 UTC"}, Hostname:"ci-4547.0.0-a-623de6ebc0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:15:56.032445 containerd[2077]: 2025-12-16 12:15:55.720 [INFO][5291] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:15:56.032445 containerd[2077]: 2025-12-16 12:15:55.882 [INFO][5291] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:15:56.032445 containerd[2077]: 2025-12-16 12:15:55.882 [INFO][5291] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-623de6ebc0' Dec 16 12:15:56.032445 containerd[2077]: 2025-12-16 12:15:55.935 [INFO][5291] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.539adacc9b390509b9729ac28c6d86bc99f0c983b5c1a2d6ab37ac22f95aa53a" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:56.032445 containerd[2077]: 2025-12-16 12:15:55.947 [INFO][5291] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:56.032445 containerd[2077]: 2025-12-16 12:15:55.953 [INFO][5291] ipam/ipam.go 511: Trying affinity for 192.168.108.64/26 host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:56.032445 containerd[2077]: 2025-12-16 12:15:55.957 [INFO][5291] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.64/26 host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:56.032445 containerd[2077]: 2025-12-16 12:15:55.961 [INFO][5291] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.64/26 host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:56.032445 containerd[2077]: 2025-12-16 12:15:55.961 [INFO][5291] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.108.64/26 handle="k8s-pod-network.539adacc9b390509b9729ac28c6d86bc99f0c983b5c1a2d6ab37ac22f95aa53a" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:56.032445 containerd[2077]: 2025-12-16 12:15:55.964 [INFO][5291] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.539adacc9b390509b9729ac28c6d86bc99f0c983b5c1a2d6ab37ac22f95aa53a Dec 16 12:15:56.032445 containerd[2077]: 2025-12-16 12:15:55.975 [INFO][5291] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.108.64/26 handle="k8s-pod-network.539adacc9b390509b9729ac28c6d86bc99f0c983b5c1a2d6ab37ac22f95aa53a" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:56.032445 containerd[2077]: 2025-12-16 12:15:55.990 [INFO][5291] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.108.72/26] block=192.168.108.64/26 handle="k8s-pod-network.539adacc9b390509b9729ac28c6d86bc99f0c983b5c1a2d6ab37ac22f95aa53a" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:56.032445 containerd[2077]: 2025-12-16 12:15:55.990 [INFO][5291] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.72/26] handle="k8s-pod-network.539adacc9b390509b9729ac28c6d86bc99f0c983b5c1a2d6ab37ac22f95aa53a" host="ci-4547.0.0-a-623de6ebc0" Dec 16 12:15:56.032445 containerd[2077]: 2025-12-16 12:15:55.990 [INFO][5291] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:15:56.032445 containerd[2077]: 2025-12-16 12:15:55.990 [INFO][5291] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.108.72/26] IPv6=[] ContainerID="539adacc9b390509b9729ac28c6d86bc99f0c983b5c1a2d6ab37ac22f95aa53a" HandleID="k8s-pod-network.539adacc9b390509b9729ac28c6d86bc99f0c983b5c1a2d6ab37ac22f95aa53a" Workload="ci--4547.0.0--a--623de6ebc0-k8s-calico--kube--controllers--6574577cd4--cw4js-eth0" Dec 16 12:15:56.032825 containerd[2077]: 2025-12-16 12:15:55.993 [INFO][5254] cni-plugin/k8s.go 418: Populated endpoint ContainerID="539adacc9b390509b9729ac28c6d86bc99f0c983b5c1a2d6ab37ac22f95aa53a" Namespace="calico-system" Pod="calico-kube-controllers-6574577cd4-cw4js" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-calico--kube--controllers--6574577cd4--cw4js-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--623de6ebc0-k8s-calico--kube--controllers--6574577cd4--cw4js-eth0", GenerateName:"calico-kube-controllers-6574577cd4-", Namespace:"calico-system", SelfLink:"", UID:"4517fd6f-78fe-47ae-9d6d-f6ee6d3ebba7", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 15, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6574577cd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-623de6ebc0", ContainerID:"", Pod:"calico-kube-controllers-6574577cd4-cw4js", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.108.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0e9cb7af7fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:15:56.032825 containerd[2077]: 2025-12-16 12:15:55.994 [INFO][5254] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.72/32] ContainerID="539adacc9b390509b9729ac28c6d86bc99f0c983b5c1a2d6ab37ac22f95aa53a" Namespace="calico-system" Pod="calico-kube-controllers-6574577cd4-cw4js" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-calico--kube--controllers--6574577cd4--cw4js-eth0" Dec 16 12:15:56.032825 containerd[2077]: 2025-12-16 12:15:55.994 [INFO][5254] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e9cb7af7fa ContainerID="539adacc9b390509b9729ac28c6d86bc99f0c983b5c1a2d6ab37ac22f95aa53a" Namespace="calico-system" Pod="calico-kube-controllers-6574577cd4-cw4js" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-calico--kube--controllers--6574577cd4--cw4js-eth0" Dec 16 12:15:56.032825 containerd[2077]: 2025-12-16 12:15:56.002 [INFO][5254] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="539adacc9b390509b9729ac28c6d86bc99f0c983b5c1a2d6ab37ac22f95aa53a" Namespace="calico-system" Pod="calico-kube-controllers-6574577cd4-cw4js" 
WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-calico--kube--controllers--6574577cd4--cw4js-eth0" Dec 16 12:15:56.032825 containerd[2077]: 2025-12-16 12:15:56.002 [INFO][5254] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="539adacc9b390509b9729ac28c6d86bc99f0c983b5c1a2d6ab37ac22f95aa53a" Namespace="calico-system" Pod="calico-kube-controllers-6574577cd4-cw4js" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-calico--kube--controllers--6574577cd4--cw4js-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--623de6ebc0-k8s-calico--kube--controllers--6574577cd4--cw4js-eth0", GenerateName:"calico-kube-controllers-6574577cd4-", Namespace:"calico-system", SelfLink:"", UID:"4517fd6f-78fe-47ae-9d6d-f6ee6d3ebba7", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 15, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6574577cd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-623de6ebc0", ContainerID:"539adacc9b390509b9729ac28c6d86bc99f0c983b5c1a2d6ab37ac22f95aa53a", Pod:"calico-kube-controllers-6574577cd4-cw4js", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.108.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0e9cb7af7fa", MAC:"82:3c:74:59:eb:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:15:56.032825 containerd[2077]: 2025-12-16 12:15:56.022 [INFO][5254] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="539adacc9b390509b9729ac28c6d86bc99f0c983b5c1a2d6ab37ac22f95aa53a" Namespace="calico-system" Pod="calico-kube-controllers-6574577cd4-cw4js" WorkloadEndpoint="ci--4547.0.0--a--623de6ebc0-k8s-calico--kube--controllers--6574577cd4--cw4js-eth0" Dec 16 12:15:56.035298 containerd[2077]: time="2025-12-16T12:15:56.035215738Z" level=info msg="StartContainer for \"0cd69dacefbfac0b8984ddc7988ef8df3c7cb45612c273a1794bc7c4c2628378\"" Dec 16 12:15:56.037950 systemd[1]: Started cri-containerd-50c8fe29d4ff27473c23380b1761b72621782ce9832d3c4acde9eb3f5999b20e.scope - libcontainer container 50c8fe29d4ff27473c23380b1761b72621782ce9832d3c4acde9eb3f5999b20e. 
Dec 16 12:15:56.039066 containerd[2077]: time="2025-12-16T12:15:56.038693541Z" level=info msg="connecting to shim 0cd69dacefbfac0b8984ddc7988ef8df3c7cb45612c273a1794bc7c4c2628378" address="unix:///run/containerd/s/95997f902c89cb7316f5dbaa661009b07574c38923f6684d4a59c1e42450f7d9" protocol=ttrpc version=3 Dec 16 12:15:56.059000 audit[5480]: NETFILTER_CFG table=filter:136 family=2 entries=52 op=nft_register_chain pid=5480 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:15:56.059000 audit[5480]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24296 a0=3 a1=fffff3e790e0 a2=0 a3=ffffa57c1fa8 items=0 ppid=4647 pid=5480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:56.059000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:15:56.062937 systemd[1]: Started cri-containerd-0cd69dacefbfac0b8984ddc7988ef8df3c7cb45612c273a1794bc7c4c2628378.scope - libcontainer container 0cd69dacefbfac0b8984ddc7988ef8df3c7cb45612c273a1794bc7c4c2628378. Dec 16 12:15:56.064000 audit: BPF prog-id=265 op=LOAD Dec 16 12:15:56.065000 audit: BPF prog-id=266 op=LOAD Dec 16 12:15:56.065000 audit[5443]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5427 pid=5443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:56.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530633866653239643466663237343733633233333830623137363162 Dec 16 12:15:56.066000 audit: BPF prog-id=266 op=UNLOAD Dec 16 12:15:56.066000 audit[5443]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5427 pid=5443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:56.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530633866653239643466663237343733633233333830623137363162 Dec 16 12:15:56.066000 audit: BPF prog-id=267 op=LOAD Dec 16 12:15:56.066000 audit[5443]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5427 pid=5443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:56.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530633866653239643466663237343733633233333830623137363162 Dec 16 12:15:56.066000 audit: BPF prog-id=268 op=LOAD Dec 16 12:15:56.066000 audit[5443]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5427 
pid=5443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:56.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530633866653239643466663237343733633233333830623137363162 Dec 16 12:15:56.066000 audit: BPF prog-id=268 op=UNLOAD Dec 16 12:15:56.066000 audit[5443]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5427 pid=5443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:56.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530633866653239643466663237343733633233333830623137363162 Dec 16 12:15:56.066000 audit: BPF prog-id=267 op=UNLOAD Dec 16 12:15:56.066000 audit[5443]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5427 pid=5443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:56.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530633866653239643466663237343733633233333830623137363162 Dec 16 12:15:56.066000 audit: BPF prog-id=269 op=LOAD Dec 16 12:15:56.066000 audit[5443]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5427 pid=5443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:56.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530633866653239643466663237343733633233333830623137363162 Dec 16 12:15:56.077000 audit: BPF prog-id=270 op=LOAD Dec 16 12:15:56.078000 audit: BPF prog-id=271 op=LOAD Dec 16 12:15:56.078000 audit[5461]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=5329 pid=5461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:56.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063643639646163656662666163306238393834646463373938386566 Dec 16 12:15:56.078000 audit: BPF prog-id=271 op=UNLOAD Dec 16 12:15:56.078000 audit[5461]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5329 pid=5461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:56.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063643639646163656662666163306238393834646463373938386566 Dec 16 12:15:56.078000 audit: BPF prog-id=272 op=LOAD Dec 16 12:15:56.078000 audit[5461]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=5329 pid=5461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:56.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063643639646163656662666163306238393834646463373938386566 Dec 16 12:15:56.078000 audit: BPF prog-id=273 op=LOAD Dec 16 12:15:56.078000 audit[5461]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=5329 pid=5461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:56.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063643639646163656662666163306238393834646463373938386566 Dec 16 12:15:56.078000 audit: BPF prog-id=273 op=UNLOAD Dec 16 12:15:56.078000 audit[5461]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5329 pid=5461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:56.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063643639646163656662666163306238393834646463373938386566 Dec 16 12:15:56.078000 audit: BPF prog-id=272 op=UNLOAD Dec 16 12:15:56.078000 audit[5461]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5329 pid=5461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:56.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063643639646163656662666163306238393834646463373938386566 Dec 16 12:15:56.078000 audit: BPF prog-id=274 op=LOAD Dec 16 12:15:56.078000 audit[5461]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=5329 pid=5461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:56.078000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063643639646163656662666163306238393834646463373938386566 Dec 16 12:15:56.095729 containerd[2077]: time="2025-12-16T12:15:56.095594463Z" level=info msg="connecting to shim 539adacc9b390509b9729ac28c6d86bc99f0c983b5c1a2d6ab37ac22f95aa53a" address="unix:///run/containerd/s/9e5f69d470b50ee434cbd02523dc18c9025da01601876068e3fbcbf0af3e1a33" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:15:56.109848 containerd[2077]: time="2025-12-16T12:15:56.109646131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-xnsjr,Uid:42e08362-84ba-4be1-b9a5-3a3391796c9d,Namespace:calico-system,Attempt:0,} returns sandbox id \"50c8fe29d4ff27473c23380b1761b72621782ce9832d3c4acde9eb3f5999b20e\"" Dec 16 12:15:56.111819 containerd[2077]: time="2025-12-16T12:15:56.111157759Z" level=info msg="StartContainer for \"0cd69dacefbfac0b8984ddc7988ef8df3c7cb45612c273a1794bc7c4c2628378\" returns successfully" Dec 16 12:15:56.133961 systemd[1]: Started cri-containerd-539adacc9b390509b9729ac28c6d86bc99f0c983b5c1a2d6ab37ac22f95aa53a.scope - libcontainer container 539adacc9b390509b9729ac28c6d86bc99f0c983b5c1a2d6ab37ac22f95aa53a. Dec 16 12:15:56.155000 audit: BPF prog-id=275 op=LOAD Dec 16 12:15:56.155000 audit: BPF prog-id=276 op=LOAD Dec 16 12:15:56.155000 audit[5524]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5500 pid=5524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:56.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533396164616363396233393035303962393732396163323863366438 Dec 16 12:15:56.155000 audit: BPF prog-id=276 op=UNLOAD Dec 16 12:15:56.155000 audit[5524]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5500 pid=5524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:56.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533396164616363396233393035303962393732396163323863366438 Dec 16 12:15:56.155000 audit: BPF prog-id=277 op=LOAD Dec 16 12:15:56.155000 audit[5524]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5500 pid=5524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:56.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533396164616363396233393035303962393732396163323863366438 Dec 16 12:15:56.155000 audit: BPF prog-id=278 op=LOAD Dec 16 12:15:56.155000 audit[5524]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5500 pid=5524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:56.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533396164616363396233393035303962393732396163323863366438 Dec 16 12:15:56.155000 audit: BPF prog-id=278 op=UNLOAD Dec 16 12:15:56.155000 audit[5524]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5500 pid=5524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:56.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533396164616363396233393035303962393732396163323863366438 Dec 16 12:15:56.155000 audit: BPF prog-id=277 op=UNLOAD Dec 16 12:15:56.155000 audit[5524]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5500 pid=5524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:56.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533396164616363396233393035303962393732396163323863366438 Dec 16 12:15:56.155000 audit: BPF prog-id=279 op=LOAD Dec 16 12:15:56.155000 audit[5524]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5500 pid=5524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:56.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533396164616363396233393035303962393732396163323863366438 Dec 16 12:15:56.181130 containerd[2077]: time="2025-12-16T12:15:56.181062494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6574577cd4-cw4js,Uid:4517fd6f-78fe-47ae-9d6d-f6ee6d3ebba7,Namespace:calico-system,Attempt:0,} returns sandbox id \"539adacc9b390509b9729ac28c6d86bc99f0c983b5c1a2d6ab37ac22f95aa53a\"" Dec 16 12:15:56.266414 containerd[2077]: time="2025-12-16T12:15:56.265649415Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:15:56.269123 containerd[2077]: time="2025-12-16T12:15:56.269029843Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:15:56.269123 containerd[2077]: time="2025-12-16T12:15:56.269064982Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:15:56.269295 kubelet[3629]: E1216 12:15:56.269253 3629 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:15:56.269382 kubelet[3629]: E1216 12:15:56.269300 3629 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:15:56.269469 kubelet[3629]: E1216 12:15:56.269446 3629 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-65f8f4c6db-nsvdz_calico-apiserver(934998e2-1bd4-4baf-a9e2-cc5a0c414cea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:15:56.269499 kubelet[3629]: E1216 12:15:56.269479 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f8f4c6db-nsvdz" podUID="934998e2-1bd4-4baf-a9e2-cc5a0c414cea" Dec 16 12:15:56.270017 containerd[2077]: time="2025-12-16T12:15:56.269929650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:15:56.524184 containerd[2077]: time="2025-12-16T12:15:56.524029385Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:15:56.527518 containerd[2077]: time="2025-12-16T12:15:56.527427775Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:15:56.527518 containerd[2077]: time="2025-12-16T12:15:56.527470938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:15:56.527700 kubelet[3629]: E1216 12:15:56.527659 3629 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:15:56.527783 kubelet[3629]: E1216 12:15:56.527705 3629 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:15:56.527935 kubelet[3629]: E1216 12:15:56.527915 3629 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod 
goldmane-7c778bb748-xnsjr_calico-system(42e08362-84ba-4be1-b9a5-3a3391796c9d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:15:56.527988 kubelet[3629]: E1216 12:15:56.527944 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xnsjr" podUID="42e08362-84ba-4be1-b9a5-3a3391796c9d" Dec 16 12:15:56.528413 containerd[2077]: time="2025-12-16T12:15:56.528317581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:15:56.644488 kubelet[3629]: E1216 12:15:56.644337 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xnsjr" podUID="42e08362-84ba-4be1-b9a5-3a3391796c9d" Dec 16 12:15:56.647060 kubelet[3629]: E1216 12:15:56.647022 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f8f4c6db-nsvdz" podUID="934998e2-1bd4-4baf-a9e2-cc5a0c414cea" Dec 16 12:15:56.670000 audit[5554]: NETFILTER_CFG table=filter:137 family=2 entries=14 op=nft_register_rule pid=5554 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:56.670000 audit[5554]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe95e1060 a2=0 a3=1 items=0 ppid=3775 pid=5554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:56.670000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:56.674000 audit[5554]: NETFILTER_CFG table=nat:138 family=2 entries=20 op=nft_register_rule pid=5554 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:56.674000 audit[5554]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffe95e1060 a2=0 a3=1 items=0 ppid=3775 pid=5554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:56.674000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:56.699166 kubelet[3629]: I1216 12:15:56.699088 3629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-66bc5c9577-d7m9p" podStartSLOduration=40.699069535 podStartE2EDuration="40.699069535s" podCreationTimestamp="2025-12-16 12:15:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:15:56.681185947 +0000 UTC m=+45.282178649" watchObservedRunningTime="2025-12-16 12:15:56.699069535 +0000 UTC m=+45.300062237" Dec 16 12:15:56.775642 containerd[2077]: time="2025-12-16T12:15:56.775374330Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:15:56.778876 containerd[2077]: time="2025-12-16T12:15:56.778727189Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:15:56.778876 containerd[2077]: time="2025-12-16T12:15:56.778777048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:15:56.779212 kubelet[3629]: E1216 12:15:56.779168 3629 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:15:56.779292 kubelet[3629]: E1216 12:15:56.779220 3629 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:15:56.779329 kubelet[3629]: E1216 12:15:56.779289 3629 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6574577cd4-cw4js_calico-system(4517fd6f-78fe-47ae-9d6d-f6ee6d3ebba7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:15:56.779329 kubelet[3629]: E1216 12:15:56.779317 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6574577cd4-cw4js" podUID="4517fd6f-78fe-47ae-9d6d-f6ee6d3ebba7" Dec 16 12:15:56.986932 systemd-networkd[1659]: califd5fda51e6f: Gained IPv6LL Dec 16 12:15:57.050971 systemd-networkd[1659]: cali95b81e4a56a: Gained IPv6LL Dec 16 12:15:57.179951 systemd-networkd[1659]: calia72dd2dd753: Gained IPv6LL Dec 16 12:15:57.434898 systemd-networkd[1659]: cali0e9cb7af7fa: Gained IPv6LL Dec 16 12:15:57.652452 kubelet[3629]: E1216 12:15:57.652411 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6574577cd4-cw4js" podUID="4517fd6f-78fe-47ae-9d6d-f6ee6d3ebba7" Dec 16 12:15:57.652976 kubelet[3629]: E1216 12:15:57.652656 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xnsjr" podUID="42e08362-84ba-4be1-b9a5-3a3391796c9d" Dec 16 12:15:57.652976 kubelet[3629]: E1216 12:15:57.652709 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f8f4c6db-nsvdz" podUID="934998e2-1bd4-4baf-a9e2-cc5a0c414cea" Dec 16 12:15:57.714000 audit[5557]: NETFILTER_CFG table=filter:139 family=2 entries=14 op=nft_register_rule pid=5557 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:57.719588 kernel: kauditd_printk_skb: 227 callbacks suppressed Dec 16 12:15:57.719700 kernel: audit: type=1325 audit(1765887357.714:766): table=filter:139 family=2 entries=14 op=nft_register_rule pid=5557 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:57.714000 audit[5557]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd62d0170 a2=0 a3=1 items=0 ppid=3775 pid=5557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:57.750811 kernel: audit: type=1300 audit(1765887357.714:766): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd62d0170 a2=0 a3=1 items=0 ppid=3775 pid=5557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:57.714000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:57.763788 kernel: audit: type=1327 audit(1765887357.714:766): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:57.762000 audit[5557]: NETFILTER_CFG table=nat:140 family=2 entries=56 op=nft_register_chain pid=5557 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:57.762000 audit[5557]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffd62d0170 a2=0 a3=1 items=0 ppid=3775 pid=5557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:57.798475 kernel: audit: type=1325 audit(1765887357.762:767): 
table=nat:140 family=2 entries=56 op=nft_register_chain pid=5557 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:57.798570 kernel: audit: type=1300 audit(1765887357.762:767): arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffd62d0170 a2=0 a3=1 items=0 ppid=3775 pid=5557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:57.762000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:57.807688 kernel: audit: type=1327 audit(1765887357.762:767): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:01.481395 containerd[2077]: time="2025-12-16T12:16:01.481345948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:16:01.780266 containerd[2077]: time="2025-12-16T12:16:01.779907607Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:01.784512 containerd[2077]: time="2025-12-16T12:16:01.784461472Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:16:01.784603 containerd[2077]: time="2025-12-16T12:16:01.784569993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:01.784765 kubelet[3629]: E1216 12:16:01.784714 3629 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:16:01.785059 kubelet[3629]: E1216 12:16:01.784776 3629 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:16:01.785059 kubelet[3629]: E1216 12:16:01.784839 3629 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-568467779b-jqkm2_calico-system(a62f9a6a-f8d3-4852-be55-6d4bed6c90c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:01.786010 containerd[2077]: time="2025-12-16T12:16:01.785983340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:16:02.036360 containerd[2077]: time="2025-12-16T12:16:02.036237733Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:02.039613 containerd[2077]: time="2025-12-16T12:16:02.039558931Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:16:02.039721 containerd[2077]: 
time="2025-12-16T12:16:02.039656115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:02.039931 kubelet[3629]: E1216 12:16:02.039889 3629 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:16:02.040004 kubelet[3629]: E1216 12:16:02.039938 3629 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:16:02.040027 kubelet[3629]: E1216 12:16:02.040007 3629 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-568467779b-jqkm2_calico-system(a62f9a6a-f8d3-4852-be55-6d4bed6c90c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:02.040065 kubelet[3629]: E1216 12:16:02.040040 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-568467779b-jqkm2" podUID="a62f9a6a-f8d3-4852-be55-6d4bed6c90c8" Dec 16 12:16:09.478061 containerd[2077]: time="2025-12-16T12:16:09.477916260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:16:09.772509 containerd[2077]: time="2025-12-16T12:16:09.772183234Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:09.775713 containerd[2077]: time="2025-12-16T12:16:09.775553593Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:16:09.775713 containerd[2077]: time="2025-12-16T12:16:09.775645864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:09.775903 kubelet[3629]: E1216 12:16:09.775859 3629 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:16:09.776170 kubelet[3629]: E1216 12:16:09.775908 3629 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:16:09.776879 kubelet[3629]: E1216 12:16:09.776840 3629 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-65f8f4c6db-nsvdz_calico-apiserver(934998e2-1bd4-4baf-a9e2-cc5a0c414cea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:09.776919 kubelet[3629]: E1216 12:16:09.776892 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f8f4c6db-nsvdz" podUID="934998e2-1bd4-4baf-a9e2-cc5a0c414cea" Dec 16 12:16:09.777454 containerd[2077]: time="2025-12-16T12:16:09.777423923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:16:10.043005 containerd[2077]: time="2025-12-16T12:16:10.042962037Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:10.046309 containerd[2077]: time="2025-12-16T12:16:10.046271400Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:16:10.046410 containerd[2077]: time="2025-12-16T12:16:10.046351654Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:10.046543 kubelet[3629]: E1216 12:16:10.046505 3629 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:16:10.046597 kubelet[3629]: E1216 12:16:10.046552 3629 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:16:10.046629 kubelet[3629]: E1216 12:16:10.046615 3629 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-lft87_calico-system(8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:10.048290 containerd[2077]: time="2025-12-16T12:16:10.048095894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:16:10.416724 containerd[2077]: time="2025-12-16T12:16:10.416601962Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:10.420873 containerd[2077]: time="2025-12-16T12:16:10.420723636Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:16:10.420873 containerd[2077]: time="2025-12-16T12:16:10.420744933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:10.421053 kubelet[3629]: E1216 12:16:10.421006 3629 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:16:10.421098 kubelet[3629]: E1216 12:16:10.421056 3629 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:16:10.421143 kubelet[3629]: E1216 12:16:10.421122 3629 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-lft87_calico-system(8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:10.421183 kubelet[3629]: E1216 12:16:10.421158 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lft87" podUID="8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85" Dec 16 12:16:10.478342 containerd[2077]: time="2025-12-16T12:16:10.478158241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:16:10.722642 containerd[2077]: time="2025-12-16T12:16:10.722497924Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:10.725934 containerd[2077]: time="2025-12-16T12:16:10.725829040Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:16:10.725934 containerd[2077]: time="2025-12-16T12:16:10.725888701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:10.726128 kubelet[3629]: E1216 12:16:10.726086 3629 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:16:10.726175 kubelet[3629]: E1216 12:16:10.726132 3629 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:16:10.726236 kubelet[3629]: E1216 12:16:10.726211 3629 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-65f8f4c6db-glr2q_calico-apiserver(a081eac5-c790-4263-a08c-1af1e10fce20): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:10.726575 kubelet[3629]: E1216 12:16:10.726550 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f8f4c6db-glr2q" podUID="a081eac5-c790-4263-a08c-1af1e10fce20" Dec 16 12:16:11.479459 containerd[2077]: time="2025-12-16T12:16:11.478916687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:16:11.745541 containerd[2077]: time="2025-12-16T12:16:11.745390083Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:11.749352 containerd[2077]: time="2025-12-16T12:16:11.749215889Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:16:11.749352 containerd[2077]: time="2025-12-16T12:16:11.749311045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:11.750942 kubelet[3629]: E1216 12:16:11.750895 3629 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:16:11.751510 kubelet[3629]: E1216 12:16:11.750949 3629 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:16:11.751510 kubelet[3629]: E1216 12:16:11.751026 3629 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6574577cd4-cw4js_calico-system(4517fd6f-78fe-47ae-9d6d-f6ee6d3ebba7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:11.751510 kubelet[3629]: E1216 12:16:11.751051 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6574577cd4-cw4js" podUID="4517fd6f-78fe-47ae-9d6d-f6ee6d3ebba7" Dec 16 12:16:13.479412 containerd[2077]: time="2025-12-16T12:16:13.479180725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:16:13.742608 containerd[2077]: time="2025-12-16T12:16:13.742472832Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:13.760459 containerd[2077]: time="2025-12-16T12:16:13.760390550Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:16:13.761186 containerd[2077]: time="2025-12-16T12:16:13.760501706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:13.761367 kubelet[3629]: E1216 12:16:13.761282 3629 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:16:13.761367 kubelet[3629]: E1216 12:16:13.761353 3629 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:16:13.762583 kubelet[3629]: E1216 12:16:13.762392 3629 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-xnsjr_calico-system(42e08362-84ba-4be1-b9a5-3a3391796c9d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:13.762583 kubelet[3629]: E1216 12:16:13.762567 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xnsjr" podUID="42e08362-84ba-4be1-b9a5-3a3391796c9d" Dec 16 12:16:16.478143 kubelet[3629]: E1216 12:16:16.478063 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-568467779b-jqkm2" podUID="a62f9a6a-f8d3-4852-be55-6d4bed6c90c8" Dec 16 12:16:20.478006 kubelet[3629]: E1216 12:16:20.477937 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f8f4c6db-nsvdz" podUID="934998e2-1bd4-4baf-a9e2-cc5a0c414cea" Dec 16 12:16:22.479178 kubelet[3629]: E1216 12:16:22.479085 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f8f4c6db-glr2q" podUID="a081eac5-c790-4263-a08c-1af1e10fce20" Dec 16 12:16:22.480241 kubelet[3629]: E1216 12:16:22.480053 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lft87" podUID="8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85" Dec 16 12:16:24.477587 kubelet[3629]: E1216 12:16:24.477493 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6574577cd4-cw4js" podUID="4517fd6f-78fe-47ae-9d6d-f6ee6d3ebba7" Dec 16 12:16:27.482402 containerd[2077]: time="2025-12-16T12:16:27.482130915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:16:27.483188 kubelet[3629]: E1216 12:16:27.482881 
3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xnsjr" podUID="42e08362-84ba-4be1-b9a5-3a3391796c9d" Dec 16 12:16:27.742853 containerd[2077]: time="2025-12-16T12:16:27.742064351Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:27.745189 containerd[2077]: time="2025-12-16T12:16:27.745082178Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:16:27.745287 containerd[2077]: time="2025-12-16T12:16:27.745145316Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:27.745436 kubelet[3629]: E1216 12:16:27.745391 3629 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:16:27.745485 kubelet[3629]: E1216 12:16:27.745436 3629 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:16:27.745506 kubelet[3629]: E1216 12:16:27.745497 3629 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-568467779b-jqkm2_calico-system(a62f9a6a-f8d3-4852-be55-6d4bed6c90c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:27.748517 containerd[2077]: time="2025-12-16T12:16:27.748492013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:16:27.980679 containerd[2077]: time="2025-12-16T12:16:27.980478514Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:27.986082 containerd[2077]: time="2025-12-16T12:16:27.986042882Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:16:27.986498 containerd[2077]: time="2025-12-16T12:16:27.986087524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:27.986569 kubelet[3629]: E1216 12:16:27.986524 3629 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:16:27.986620 kubelet[3629]: E1216 12:16:27.986575 3629 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:16:27.986736 kubelet[3629]: E1216 12:16:27.986716 3629 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-568467779b-jqkm2_calico-system(a62f9a6a-f8d3-4852-be55-6d4bed6c90c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:27.986936 kubelet[3629]: E1216 12:16:27.986861 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-568467779b-jqkm2" podUID="a62f9a6a-f8d3-4852-be55-6d4bed6c90c8" Dec 16 12:16:33.482803 containerd[2077]: time="2025-12-16T12:16:33.481026553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:16:33.765025 containerd[2077]: time="2025-12-16T12:16:33.764899207Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:33.770514 containerd[2077]: time="2025-12-16T12:16:33.770416355Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:16:33.770514 containerd[2077]: time="2025-12-16T12:16:33.770483824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:33.770845 kubelet[3629]: E1216 12:16:33.770796 3629 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:16:33.771188 kubelet[3629]: E1216 12:16:33.770851 3629 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:16:33.771188 kubelet[3629]: E1216 12:16:33.770926 3629 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-lft87_calico-system(8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:33.772727 containerd[2077]: time="2025-12-16T12:16:33.772671770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:16:34.063936 containerd[2077]: time="2025-12-16T12:16:34.063884012Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:34.067795 containerd[2077]: time="2025-12-16T12:16:34.067685048Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:16:34.067795 containerd[2077]: time="2025-12-16T12:16:34.067733875Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:34.068110 kubelet[3629]: E1216 12:16:34.068060 3629 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:16:34.068166 kubelet[3629]: E1216 12:16:34.068114 3629 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:16:34.068204 kubelet[3629]: E1216 12:16:34.068183 3629 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-lft87_calico-system(8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:34.068255 kubelet[3629]: E1216 12:16:34.068232 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lft87" podUID="8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85" Dec 16 12:16:34.479038 containerd[2077]: time="2025-12-16T12:16:34.478559513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:16:34.790565 containerd[2077]: time="2025-12-16T12:16:34.790351244Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:34.794291 containerd[2077]: time="2025-12-16T12:16:34.794251711Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:16:34.794415 containerd[2077]: time="2025-12-16T12:16:34.794334460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:34.794663 kubelet[3629]: E1216 12:16:34.794560 3629 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:16:34.794663 kubelet[3629]: E1216 12:16:34.794626 3629 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:16:34.795106 kubelet[3629]: E1216 12:16:34.795031 3629 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-65f8f4c6db-nsvdz_calico-apiserver(934998e2-1bd4-4baf-a9e2-cc5a0c414cea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:34.795106 kubelet[3629]: E1216 12:16:34.795068 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f8f4c6db-nsvdz" podUID="934998e2-1bd4-4baf-a9e2-cc5a0c414cea" Dec 16 12:16:36.479022 containerd[2077]: time="2025-12-16T12:16:36.478927587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:16:36.763810 containerd[2077]: time="2025-12-16T12:16:36.763525851Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:36.766879 containerd[2077]: time="2025-12-16T12:16:36.766773389Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:16:36.766978 containerd[2077]: time="2025-12-16T12:16:36.766863379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:36.767144 kubelet[3629]: E1216 12:16:36.767106 3629 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:16:36.767413 kubelet[3629]: E1216 12:16:36.767151 3629 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:16:36.767413 kubelet[3629]: E1216 12:16:36.767211 3629 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-65f8f4c6db-glr2q_calico-apiserver(a081eac5-c790-4263-a08c-1af1e10fce20): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:36.767413 kubelet[3629]: E1216 12:16:36.767238 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f8f4c6db-glr2q" podUID="a081eac5-c790-4263-a08c-1af1e10fce20" Dec 16 12:16:39.479342 containerd[2077]: time="2025-12-16T12:16:39.479203689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:16:39.749157 containerd[2077]: time="2025-12-16T12:16:39.748997343Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:39.753551 containerd[2077]: time="2025-12-16T12:16:39.753494805Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:16:39.753669 containerd[2077]: time="2025-12-16T12:16:39.753574706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:39.754853 kubelet[3629]: E1216 12:16:39.754809 3629 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:16:39.755608 kubelet[3629]: E1216 12:16:39.755182 3629 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:16:39.755702 kubelet[3629]: E1216 12:16:39.755683 3629 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6574577cd4-cw4js_calico-system(4517fd6f-78fe-47ae-9d6d-f6ee6d3ebba7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:39.755789 kubelet[3629]: E1216 12:16:39.755770 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6574577cd4-cw4js" podUID="4517fd6f-78fe-47ae-9d6d-f6ee6d3ebba7" Dec 16 12:16:40.478291 containerd[2077]: time="2025-12-16T12:16:40.478256376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:16:40.779861 containerd[2077]: time="2025-12-16T12:16:40.778972275Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:40.783170 containerd[2077]: time="2025-12-16T12:16:40.783125730Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:16:40.783409 containerd[2077]: time="2025-12-16T12:16:40.783150132Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:40.783610 kubelet[3629]: E1216 12:16:40.783506 3629 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:16:40.783610 kubelet[3629]: E1216 12:16:40.783562 3629 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:16:40.784072 kubelet[3629]: E1216 12:16:40.783942 3629 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-xnsjr_calico-system(42e08362-84ba-4be1-b9a5-3a3391796c9d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:40.784072 kubelet[3629]: E1216 12:16:40.784006 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xnsjr" podUID="42e08362-84ba-4be1-b9a5-3a3391796c9d" Dec 16 12:16:41.482033 kubelet[3629]: E1216 12:16:41.480136 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-568467779b-jqkm2" 
podUID="a62f9a6a-f8d3-4852-be55-6d4bed6c90c8" Dec 16 12:16:45.481527 kubelet[3629]: E1216 12:16:45.481220 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lft87" podUID="8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85" Dec 16 12:16:49.478867 kubelet[3629]: E1216 12:16:49.478603 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f8f4c6db-nsvdz" podUID="934998e2-1bd4-4baf-a9e2-cc5a0c414cea" Dec 16 12:16:51.480311 kubelet[3629]: E1216 12:16:51.480225 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f8f4c6db-glr2q" podUID="a081eac5-c790-4263-a08c-1af1e10fce20" Dec 16 12:16:51.483777 kubelet[3629]: E1216 12:16:51.482858 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6574577cd4-cw4js" podUID="4517fd6f-78fe-47ae-9d6d-f6ee6d3ebba7" Dec 16 12:16:53.480912 kubelet[3629]: E1216 12:16:53.480470 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xnsjr" podUID="42e08362-84ba-4be1-b9a5-3a3391796c9d" Dec 16 12:16:54.163517 systemd[1]: Started sshd@7-10.200.20.36:22-10.200.16.10:43550.service - OpenSSH per-connection server daemon (10.200.16.10:43550). 
Dec 16 12:16:54.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.36:22-10.200.16.10:43550 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:54.183786 kernel: audit: type=1130 audit(1765887414.163:768): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.36:22-10.200.16.10:43550 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:54.483320 kubelet[3629]: E1216 12:16:54.482937 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-568467779b-jqkm2" podUID="a62f9a6a-f8d3-4852-be55-6d4bed6c90c8" Dec 16 12:16:54.574000 audit[5654]: USER_ACCT pid=5654 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:54.593151 sshd[5654]: Accepted publickey for core from 10.200.16.10 port 43550 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:16:54.595296 sshd-session[5654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:16:54.593000 audit[5654]: CRED_ACQ pid=5654 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:54.613298 kernel: audit: type=1101 audit(1765887414.574:769): pid=5654 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:54.613371 kernel: audit: type=1103 audit(1765887414.593:770): pid=5654 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:54.618126 systemd-logind[2038]: New session 11 of user core. 
Dec 16 12:16:54.628689 kernel: audit: type=1006 audit(1765887414.593:771): pid=5654 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 16 12:16:54.593000 audit[5654]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd671d620 a2=3 a3=0 items=0 ppid=1 pid=5654 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:54.647996 kernel: audit: type=1300 audit(1765887414.593:771): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd671d620 a2=3 a3=0 items=0 ppid=1 pid=5654 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:54.647290 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 12:16:54.593000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:16:54.657317 kernel: audit: type=1327 audit(1765887414.593:771): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:16:54.658000 audit[5654]: USER_START pid=5654 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:54.680000 audit[5658]: CRED_ACQ pid=5658 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:54.697644 kernel: audit: type=1105 audit(1765887414.658:772): pid=5654 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:54.697913 kernel: audit: type=1103 audit(1765887414.680:773): pid=5658 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:54.886278 sshd[5658]: Connection closed by 10.200.16.10 port 43550 Dec 16 12:16:54.887204 sshd-session[5654]: pam_unix(sshd:session): session closed for user core Dec 16 12:16:54.890000 audit[5654]: USER_END pid=5654 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:54.916187 systemd[1]: sshd@7-10.200.20.36:22-10.200.16.10:43550.service: Deactivated successfully. Dec 16 12:16:54.922564 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 12:16:54.923221 systemd-logind[2038]: Session 11 logged out. Waiting for processes to exit. Dec 16 12:16:54.926783 systemd-logind[2038]: Removed session 11. 
Dec 16 12:16:54.890000 audit[5654]: CRED_DISP pid=5654 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:54.949983 kernel: audit: type=1106 audit(1765887414.890:774): pid=5654 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:54.950099 kernel: audit: type=1104 audit(1765887414.890:775): pid=5654 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:54.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.36:22-10.200.16.10:43550 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:59.977010 systemd[1]: Started sshd@8-10.200.20.36:22-10.200.16.10:43554.service - OpenSSH per-connection server daemon (10.200.16.10:43554). Dec 16 12:16:59.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.36:22-10.200.16.10:43554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:59.980145 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:16:59.980216 kernel: audit: type=1130 audit(1765887419.975:777): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.36:22-10.200.16.10:43554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:17:00.419000 audit[5671]: USER_ACCT pid=5671 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:00.421899 sshd[5671]: Accepted publickey for core from 10.200.16.10 port 43554 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:17:00.437000 audit[5671]: CRED_ACQ pid=5671 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:00.440242 sshd-session[5671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:00.454478 kernel: audit: type=1101 audit(1765887420.419:778): pid=5671 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:00.454599 kernel: audit: type=1103 audit(1765887420.437:779): pid=5671 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:00.464635 kernel: audit: type=1006 audit(1765887420.437:780): pid=5671 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 16 12:17:00.437000 audit[5671]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcdcafdd0 a2=3 a3=0 items=0 ppid=1 pid=5671 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:00.468813 systemd-logind[2038]: New session 12 of user core. 
Dec 16 12:17:00.480970 kubelet[3629]: E1216 12:17:00.480865 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lft87" podUID="8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85" Dec 16 12:17:00.482468 kernel: audit: type=1300 audit(1765887420.437:780): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcdcafdd0 a2=3 a3=0 items=0 ppid=1 pid=5671 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:00.437000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:00.489224 kernel: audit: type=1327 audit(1765887420.437:780): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:00.494062 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 12:17:00.497000 audit[5671]: USER_START pid=5671 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:00.499000 audit[5675]: CRED_ACQ pid=5675 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:00.533545 kernel: audit: type=1105 audit(1765887420.497:781): pid=5671 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:00.533591 kernel: audit: type=1103 audit(1765887420.499:782): pid=5675 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:00.719794 sshd[5675]: Connection closed by 10.200.16.10 port 43554 Dec 16 12:17:00.720944 sshd-session[5671]: pam_unix(sshd:session): session closed for user core Dec 16 12:17:00.721000 audit[5671]: USER_END pid=5671 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' 
Dec 16 12:17:00.726818 systemd[1]: sshd@8-10.200.20.36:22-10.200.16.10:43554.service: Deactivated successfully. Dec 16 12:17:00.729136 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 12:17:00.721000 audit[5671]: CRED_DISP pid=5671 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:00.744557 systemd-logind[2038]: Session 12 logged out. Waiting for processes to exit. Dec 16 12:17:00.746871 systemd-logind[2038]: Removed session 12. Dec 16 12:17:00.758011 kernel: audit: type=1106 audit(1765887420.721:783): pid=5671 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:00.758596 kernel: audit: type=1104 audit(1765887420.721:784): pid=5671 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:00.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.36:22-10.200.16.10:43554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:02.478459 kubelet[3629]: E1216 12:17:02.477916 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f8f4c6db-nsvdz" podUID="934998e2-1bd4-4baf-a9e2-cc5a0c414cea" Dec 16 12:17:02.480217 kubelet[3629]: E1216 12:17:02.480188 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6574577cd4-cw4js" podUID="4517fd6f-78fe-47ae-9d6d-f6ee6d3ebba7" Dec 16 12:17:05.479634 kubelet[3629]: E1216 12:17:05.479250 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xnsjr" podUID="42e08362-84ba-4be1-b9a5-3a3391796c9d" Dec 16 12:17:05.806137 systemd[1]: Started sshd@9-10.200.20.36:22-10.200.16.10:42202.service - OpenSSH per-connection server daemon (10.200.16.10:42202). 
Dec 16 12:17:05.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.36:22-10.200.16.10:42202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:05.811300 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:17:05.811397 kernel: audit: type=1130 audit(1765887425.805:786): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.36:22-10.200.16.10:42202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:06.230000 audit[5689]: USER_ACCT pid=5689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:06.250680 sshd[5689]: Accepted publickey for core from 10.200.16.10 port 42202 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:17:06.250226 sshd-session[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:06.248000 audit[5689]: CRED_ACQ pid=5689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:06.270133 kernel: audit: type=1101 audit(1765887426.230:787): pid=5689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:06.270226 kernel: audit: type=1103 audit(1765887426.248:788): pid=5689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:06.274860 systemd-logind[2038]: New session 13 of user core. Dec 16 12:17:06.280340 kernel: audit: type=1006 audit(1765887426.248:789): pid=5689 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 16 12:17:06.248000 audit[5689]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd2edec50 a2=3 a3=0 items=0 ppid=1 pid=5689 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:06.300544 kernel: audit: type=1300 audit(1765887426.248:789): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd2edec50 a2=3 a3=0 items=0 ppid=1 pid=5689 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:06.248000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:06.302043 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 16 12:17:06.307743 kernel: audit: type=1327 audit(1765887426.248:789): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:06.305000 audit[5689]: USER_START pid=5689 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:06.331537 kernel: audit: type=1105 audit(1765887426.305:790): pid=5689 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:06.333000 audit[5693]: CRED_ACQ pid=5693 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:06.349634 kernel: audit: type=1103 audit(1765887426.333:791): pid=5693 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:06.480329 kubelet[3629]: E1216 12:17:06.480202 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f8f4c6db-glr2q" podUID="a081eac5-c790-4263-a08c-1af1e10fce20" Dec 16 12:17:06.578154 sshd[5693]: Connection closed by 10.200.16.10 port 42202 Dec 16 12:17:06.578001 sshd-session[5689]: pam_unix(sshd:session): session closed for user core Dec 16 12:17:06.579000 audit[5689]: USER_END pid=5689 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:06.583683 systemd[1]: sshd@9-10.200.20.36:22-10.200.16.10:42202.service: Deactivated successfully. Dec 16 12:17:06.585663 systemd-logind[2038]: Session 13 logged out. Waiting for processes to exit. Dec 16 12:17:06.587192 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 12:17:06.589658 systemd-logind[2038]: Removed session 13. 
Dec 16 12:17:06.579000 audit[5689]: CRED_DISP pid=5689 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:06.621230 kernel: audit: type=1106 audit(1765887426.579:792): pid=5689 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:06.621367 kernel: audit: type=1104 audit(1765887426.579:793): pid=5689 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:06.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.36:22-10.200.16.10:42202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:06.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.36:22-10.200.16.10:42218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:06.671908 systemd[1]: Started sshd@10-10.200.20.36:22-10.200.16.10:42218.service - OpenSSH per-connection server daemon (10.200.16.10:42218). Dec 16 12:17:07.094000 audit[5711]: USER_ACCT pid=5711 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:07.095464 sshd[5711]: Accepted publickey for core from 10.200.16.10 port 42218 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:17:07.095000 audit[5711]: CRED_ACQ pid=5711 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:07.095000 audit[5711]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffca738840 a2=3 a3=0 items=0 ppid=1 pid=5711 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:07.095000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:07.096918 sshd-session[5711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:07.102497 systemd-logind[2038]: New session 14 of user core. Dec 16 12:17:07.111942 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 16 12:17:07.114000 audit[5711]: USER_START pid=5711 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:07.116000 audit[5715]: CRED_ACQ pid=5715 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:07.424933 sshd[5715]: Connection closed by 10.200.16.10 port 42218 Dec 16 12:17:07.426089 sshd-session[5711]: pam_unix(sshd:session): session closed for user core Dec 16 12:17:07.428000 audit[5711]: USER_END pid=5711 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:07.428000 audit[5711]: CRED_DISP pid=5711 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:07.431373 systemd-logind[2038]: Session 14 logged out. Waiting for processes to exit. Dec 16 12:17:07.432009 systemd[1]: sshd@10-10.200.20.36:22-10.200.16.10:42218.service: Deactivated successfully. Dec 16 12:17:07.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.36:22-10.200.16.10:42218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:07.435105 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 12:17:07.438894 systemd-logind[2038]: Removed session 14. Dec 16 12:17:07.523011 systemd[1]: Started sshd@11-10.200.20.36:22-10.200.16.10:42232.service - OpenSSH per-connection server daemon (10.200.16.10:42232). Dec 16 12:17:07.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.36:22-10.200.16.10:42232 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:17:07.946000 audit[5724]: USER_ACCT pid=5724 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:07.948532 sshd[5724]: Accepted publickey for core from 10.200.16.10 port 42232 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:17:07.947000 audit[5724]: CRED_ACQ pid=5724 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:07.948000 audit[5724]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff6b4c270 a2=3 a3=0 items=0 ppid=1 pid=5724 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:07.948000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:07.949400 sshd-session[5724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:07.956018 systemd-logind[2038]: New session 15 of user core. Dec 16 12:17:07.960436 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 12:17:07.964000 audit[5724]: USER_START pid=5724 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:07.965000 audit[5728]: CRED_ACQ pid=5728 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:08.228886 sshd[5728]: Connection closed by 10.200.16.10 port 42232 Dec 16 12:17:08.228300 sshd-session[5724]: pam_unix(sshd:session): session closed for user core Dec 16 12:17:08.228000 audit[5724]: USER_END pid=5724 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:08.228000 audit[5724]: CRED_DISP pid=5724 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:08.231822 systemd-logind[2038]: Session 15 logged out. Waiting for processes to exit. Dec 16 12:17:08.232432 systemd[1]: sshd@11-10.200.20.36:22-10.200.16.10:42232.service: Deactivated successfully. Dec 16 12:17:08.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.36:22-10.200.16.10:42232 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:08.234371 systemd[1]: session-15.scope: Deactivated successfully. 
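Each SSH login above produces a matching pair of USER_START and USER_END audit records that share a ses= value. A minimal sketch, assuming the journald short timestamp format used in this log (which omits the year, filled in here as 2025), that pairs those records and reports how long each session lasted:

```python
# Minimal sketch: pair USER_START / USER_END audit records by ses= and report
# session durations. The line and field formats are taken from this log.
import re
from datetime import datetime

LINE = re.compile(
    r"^(?P<ts>\w{3} +\d+ [\d:.]+) audit\[\d+\]: (?P<type>[A-Z_]+) (?P<rest>.*)$")
FIELD = re.compile(r"(\w+)=([^\s']+)")

def parse(line, year=2025):
    m = LINE.match(line)
    if not m:
        return None
    ts = datetime.strptime(f"{year} {m['ts']}", "%Y %b %d %H:%M:%S.%f")
    return ts, m["type"], dict(FIELD.findall(m["rest"]))

def session_durations(lines):
    starts, durations = {}, {}
    for line in lines:
        parsed = parse(line)
        if not parsed:
            continue
        ts, rtype, fields = parsed
        ses = fields.get("ses")
        if rtype == "USER_START":
            starts[ses] = ts
        elif rtype == "USER_END" and ses in starts:
            durations[ses] = (ts - starts.pop(ses)).total_seconds()
    return durations

sample = [
    "Dec 16 12:17:07.114000 audit[5711]: USER_START pid=5711 uid=0 auid=500 ses=14 msg='op=PAM:session_open'",
    "Dec 16 12:17:07.428000 audit[5711]: USER_END pid=5711 uid=0 auid=500 ses=14 msg='op=PAM:session_close'",
]
print(session_durations(sample))   # {'14': 0.314}
```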
Dec 16 12:17:08.236787 systemd-logind[2038]: Removed session 15. Dec 16 12:17:09.479265 containerd[2077]: time="2025-12-16T12:17:09.478033527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:17:09.748037 containerd[2077]: time="2025-12-16T12:17:09.747893772Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:17:09.753940 containerd[2077]: time="2025-12-16T12:17:09.753894996Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:17:09.755236 containerd[2077]: time="2025-12-16T12:17:09.753983510Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:17:09.755277 kubelet[3629]: E1216 12:17:09.754139 3629 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:17:09.755277 kubelet[3629]: E1216 12:17:09.754183 3629 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:17:09.755277 kubelet[3629]: E1216 12:17:09.754277 3629 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-568467779b-jqkm2_calico-system(a62f9a6a-f8d3-4852-be55-6d4bed6c90c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:17:09.757528 containerd[2077]: time="2025-12-16T12:17:09.756739825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:17:10.036827 containerd[2077]: time="2025-12-16T12:17:10.036685766Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:17:10.040096 containerd[2077]: time="2025-12-16T12:17:10.040051764Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:17:10.040190 containerd[2077]: time="2025-12-16T12:17:10.040144482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:17:10.040530 kubelet[3629]: E1216 12:17:10.040330 3629 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:17:10.040530 kubelet[3629]: E1216 12:17:10.040382 3629 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:17:10.040530 kubelet[3629]: E1216 12:17:10.040455 3629 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-568467779b-jqkm2_calico-system(a62f9a6a-f8d3-4852-be55-6d4bed6c90c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:17:10.040530 kubelet[3629]: E1216 12:17:10.040493 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-568467779b-jqkm2" podUID="a62f9a6a-f8d3-4852-be55-6d4bed6c90c8" Dec 16 12:17:12.478316 kubelet[3629]: E1216 12:17:12.478250 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lft87" podUID="8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85" Dec 16 12:17:13.256559 update_engine[2039]: I20251216 12:17:13.255882 2039 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 16 12:17:13.256559 update_engine[2039]: I20251216 12:17:13.255934 2039 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 16 12:17:13.256559 update_engine[2039]: I20251216 12:17:13.256142 2039 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 16 12:17:13.256559 update_engine[2039]: I20251216 12:17:13.256453 2039 omaha_request_params.cc:62] Current group set to alpha Dec 16 12:17:13.257610 update_engine[2039]: I20251216 12:17:13.257514 2039 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 16 12:17:13.257925 update_engine[2039]: I20251216 12:17:13.257834 2039 update_attempter.cc:643] Scheduling an action processor start. 
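The 404 Not Found fetches above come from containerd resolving tags such as ghcr.io/flatcar/calico/whisker:v3.30.4 against the registry's manifest endpoint. A minimal sketch for reproducing that check off the node via the OCI distribution API is below; it assumes the repositories are public on ghcr.io and that anonymous pull tokens are issued by https://ghcr.io/token (the standard token flow), which may not hold for private images.

```python
# Minimal sketch (not the cluster's tooling): ask an OCI registry whether a
# tag exists, mirroring the 404s containerd reports in this log.
import json
import urllib.error
import urllib.parse
import urllib.request

REGISTRY = "ghcr.io"
ACCEPT = ", ".join([
    "application/vnd.oci.image.index.v1+json",
    "application/vnd.oci.image.manifest.v1+json",
    "application/vnd.docker.distribution.manifest.list.v2+json",
    "application/vnd.docker.distribution.manifest.v2+json",
])

def tag_exists(repository: str, tag: str) -> bool:
    # 1) anonymous pull token (distribution-spec token flow; assumed for ghcr.io)
    query = urllib.parse.urlencode(
        {"service": REGISTRY, "scope": f"repository:{repository}:pull"})
    with urllib.request.urlopen(f"https://{REGISTRY}/token?{query}") as resp:
        token = json.load(resp)["token"]
    # 2) HEAD the manifest; 200 -> tag exists, 404 -> not found
    req = urllib.request.Request(
        f"https://{REGISTRY}/v2/{repository}/manifests/{tag}", method="HEAD",
        headers={"Authorization": f"Bearer {token}", "Accept": ACCEPT})
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

if __name__ == "__main__":
    for repo in ("flatcar/calico/whisker", "flatcar/calico/whisker-backend"):
        print(repo, "v3.30.4 exists:", tag_exists(repo, "v3.30.4"))
```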
Dec 16 12:17:13.257925 update_engine[2039]: I20251216 12:17:13.257864 2039 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 16 12:17:13.263499 update_engine[2039]: I20251216 12:17:13.262702 2039 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 16 12:17:13.263499 update_engine[2039]: I20251216 12:17:13.262830 2039 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 16 12:17:13.263499 update_engine[2039]: I20251216 12:17:13.262837 2039 omaha_request_action.cc:272] Request: Dec 16 12:17:13.263499 update_engine[2039]: [Omaha request XML body not preserved in this extract] Dec 16 12:17:13.263499 update_engine[2039]: I20251216 12:17:13.262843 2039 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 12:17:13.263929 locksmithd[2114]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 16 12:17:13.265349 update_engine[2039]: I20251216 12:17:13.264816 2039 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 12:17:13.265349 update_engine[2039]: I20251216 12:17:13.265303 2039 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 16 12:17:13.317097 systemd[1]: Started sshd@12-10.200.20.36:22-10.200.16.10:55894.service - OpenSSH per-connection server daemon (10.200.16.10:55894). Dec 16 12:17:13.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.36:22-10.200.16.10:55894 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:13.320767 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 12:17:13.320842 kernel: audit: type=1130 audit(1765887433.316:813): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.36:22-10.200.16.10:55894 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Dec 16 12:17:13.368679 update_engine[2039]: E20251216 12:17:13.368515 2039 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 16 12:17:13.368679 update_engine[2039]: I20251216 12:17:13.368621 2039 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 16 12:17:13.753000 audit[5748]: USER_ACCT pid=5748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:13.771958 sshd[5748]: Accepted publickey for core from 10.200.16.10 port 55894 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:17:13.772785 kernel: audit: type=1101 audit(1765887433.753:814): pid=5748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:13.772000 audit[5748]: CRED_ACQ pid=5748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:13.789669 sshd-session[5748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:13.798988 kernel: audit: type=1103 audit(1765887433.772:815): pid=5748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:13.799067 kernel: audit: type=1006 audit(1765887433.772:816): pid=5748 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 16 12:17:13.772000 audit[5748]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcb3e4340 a2=3 a3=0 items=0 ppid=1 pid=5748 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:13.805664 systemd-logind[2038]: New session 16 of user core. Dec 16 12:17:13.820312 kernel: audit: type=1300 audit(1765887433.772:816): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcb3e4340 a2=3 a3=0 items=0 ppid=1 pid=5748 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:13.828566 kernel: audit: type=1327 audit(1765887433.772:816): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:13.772000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:13.828980 systemd[1]: Started session-16.scope - Session 16 of User core. 
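update_engine's fetcher above fails with "Could not resolve host: disabled" and logs "No HTTP response, retry 1", then "retry 2" roughly ten seconds later. The sketch below is a generic retry loop in the same spirit, not update_engine's implementation; the attempt count, pause, and timeout values are illustrative assumptions.

```python
# Minimal sketch: retry an HTTP fetch with a fixed pause between attempts,
# roughly mirroring the "No HTTP response, retry N" lines in this log.
import time
import urllib.error
import urllib.request

def fetch_with_retries(url, attempts=3, pause_seconds=10, timeout=30):
    last_error = None
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as err:
            last_error = err
            print(f"No HTTP response, retry {attempt}: {err}")
            time.sleep(pause_seconds)
    raise RuntimeError(f"giving up on {url}") from last_error

# e.g. fetch_with_retries("https://disabled/")  # fails to resolve, like the log above
```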
Dec 16 12:17:13.831000 audit[5748]: USER_START pid=5748 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:13.852801 kernel: audit: type=1105 audit(1765887433.831:817): pid=5748 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:13.854000 audit[5752]: CRED_ACQ pid=5752 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:13.871793 kernel: audit: type=1103 audit(1765887433.854:818): pid=5752 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:14.087551 sshd[5752]: Connection closed by 10.200.16.10 port 55894 Dec 16 12:17:14.087859 sshd-session[5748]: pam_unix(sshd:session): session closed for user core Dec 16 12:17:14.089000 audit[5748]: USER_END pid=5748 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:14.111358 systemd[1]: sshd@12-10.200.20.36:22-10.200.16.10:55894.service: Deactivated successfully. Dec 16 12:17:14.113508 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 12:17:14.115714 systemd-logind[2038]: Session 16 logged out. Waiting for processes to exit. Dec 16 12:17:14.089000 audit[5748]: CRED_DISP pid=5748 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:14.129637 kernel: audit: type=1106 audit(1765887434.089:819): pid=5748 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:14.129746 kernel: audit: type=1104 audit(1765887434.089:820): pid=5748 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:14.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.36:22-10.200.16.10:55894 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:14.131705 systemd-logind[2038]: Removed session 16. 
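systemd names each per-connection SSH unit after the connection itself, e.g. sshd@12-10.200.20.36:22-10.200.16.10:55894.service. A small sketch for pulling the peer address and port back out of such an instance name; it assumes the "<counter>-<local-addr>:<local-port>-<peer-addr>:<peer-port>" IPv4 layout seen in this log, and an IPv6 peer would need different handling.

```python
# Minimal sketch: extract the remote endpoint from a per-connection sshd unit name.
import re

UNIT = re.compile(
    r"^sshd@(?P<n>\d+)-(?P<laddr>[\d.]+):(?P<lport>\d+)-(?P<raddr>[\d.]+):(?P<rport>\d+)\.service$")

def peer_of(unit_name):
    m = UNIT.match(unit_name)
    return (m["raddr"], int(m["rport"])) if m else None

print(peer_of("sshd@12-10.200.20.36:22-10.200.16.10:55894.service"))
# -> ('10.200.16.10', 55894)
```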
Dec 16 12:17:14.477968 kubelet[3629]: E1216 12:17:14.477506 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6574577cd4-cw4js" podUID="4517fd6f-78fe-47ae-9d6d-f6ee6d3ebba7" Dec 16 12:17:16.477464 kubelet[3629]: E1216 12:17:16.477253 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xnsjr" podUID="42e08362-84ba-4be1-b9a5-3a3391796c9d" Dec 16 12:17:16.478567 containerd[2077]: time="2025-12-16T12:17:16.477675710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:17:16.824505 containerd[2077]: time="2025-12-16T12:17:16.824450380Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:17:16.827714 containerd[2077]: time="2025-12-16T12:17:16.827671900Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:17:16.827933 containerd[2077]: time="2025-12-16T12:17:16.827794795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:17:16.828019 kubelet[3629]: E1216 12:17:16.827973 3629 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:17:16.828062 kubelet[3629]: E1216 12:17:16.828023 3629 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:17:16.828115 kubelet[3629]: E1216 12:17:16.828094 3629 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-65f8f4c6db-nsvdz_calico-apiserver(934998e2-1bd4-4baf-a9e2-cc5a0c414cea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:17:16.828148 kubelet[3629]: E1216 12:17:16.828124 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f8f4c6db-nsvdz" podUID="934998e2-1bd4-4baf-a9e2-cc5a0c414cea" Dec 16 12:17:19.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.36:22-10.200.16.10:55910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:19.174692 systemd[1]: Started sshd@13-10.200.20.36:22-10.200.16.10:55910.service - OpenSSH per-connection server daemon (10.200.16.10:55910). Dec 16 12:17:19.178652 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:17:19.178716 kernel: audit: type=1130 audit(1765887439.173:822): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.36:22-10.200.16.10:55910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:19.479706 containerd[2077]: time="2025-12-16T12:17:19.478821509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:17:19.612000 audit[5767]: USER_ACCT pid=5767 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:19.630198 sshd[5767]: Accepted publickey for core from 10.200.16.10 port 55910 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:17:19.629000 audit[5767]: CRED_ACQ pid=5767 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:19.631687 sshd-session[5767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:19.645787 kernel: audit: type=1101 audit(1765887439.612:823): pid=5767 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:19.645882 kernel: audit: type=1103 audit(1765887439.629:824): pid=5767 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:19.654335 systemd-logind[2038]: New session 17 of user core. 
Dec 16 12:17:19.655821 kernel: audit: type=1006 audit(1765887439.629:825): pid=5767 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 16 12:17:19.629000 audit[5767]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffce16080 a2=3 a3=0 items=0 ppid=1 pid=5767 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:19.673984 kernel: audit: type=1300 audit(1765887439.629:825): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffce16080 a2=3 a3=0 items=0 ppid=1 pid=5767 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:19.629000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:19.680969 kernel: audit: type=1327 audit(1765887439.629:825): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:19.682029 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 12:17:19.683000 audit[5767]: USER_START pid=5767 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:19.703000 audit[5771]: CRED_ACQ pid=5771 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:19.719913 kernel: audit: type=1105 audit(1765887439.683:826): pid=5767 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:19.720028 kernel: audit: type=1103 audit(1765887439.703:827): pid=5771 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:19.725791 containerd[2077]: time="2025-12-16T12:17:19.725694309Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:17:19.731341 containerd[2077]: time="2025-12-16T12:17:19.731165576Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:17:19.731341 containerd[2077]: time="2025-12-16T12:17:19.731221139Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:17:19.731691 kubelet[3629]: E1216 12:17:19.731413 3629 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:17:19.731691 kubelet[3629]: E1216 12:17:19.731454 3629 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:17:19.731691 kubelet[3629]: E1216 12:17:19.731517 3629 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-65f8f4c6db-glr2q_calico-apiserver(a081eac5-c790-4263-a08c-1af1e10fce20): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:17:19.731691 kubelet[3629]: E1216 12:17:19.731541 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f8f4c6db-glr2q" podUID="a081eac5-c790-4263-a08c-1af1e10fce20" Dec 16 12:17:19.908354 sshd[5771]: Connection closed by 10.200.16.10 port 55910 Dec 16 12:17:19.909710 sshd-session[5767]: pam_unix(sshd:session): session closed for user core Dec 16 12:17:19.909000 audit[5767]: USER_END pid=5767 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:19.919132 systemd[1]: sshd@13-10.200.20.36:22-10.200.16.10:55910.service: Deactivated successfully. Dec 16 12:17:19.922783 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 12:17:19.926052 systemd-logind[2038]: Session 17 logged out. Waiting for processes to exit. Dec 16 12:17:19.927802 systemd-logind[2038]: Removed session 17. 
Dec 16 12:17:19.909000 audit[5767]: CRED_DISP pid=5767 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:19.947874 kernel: audit: type=1106 audit(1765887439.909:828): pid=5767 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:19.947978 kernel: audit: type=1104 audit(1765887439.909:829): pid=5767 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:19.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.36:22-10.200.16.10:55910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:23.255552 update_engine[2039]: I20251216 12:17:23.254794 2039 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 12:17:23.255552 update_engine[2039]: I20251216 12:17:23.254899 2039 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 12:17:23.255552 update_engine[2039]: I20251216 12:17:23.255259 2039 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 16 12:17:23.302105 update_engine[2039]: E20251216 12:17:23.301959 2039 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 16 12:17:23.302105 update_engine[2039]: I20251216 12:17:23.302075 2039 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 16 12:17:23.478737 containerd[2077]: time="2025-12-16T12:17:23.478440918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:17:23.864701 containerd[2077]: time="2025-12-16T12:17:23.864569951Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:17:23.868874 containerd[2077]: time="2025-12-16T12:17:23.868815192Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:17:23.869108 containerd[2077]: time="2025-12-16T12:17:23.868848937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:17:23.869297 kubelet[3629]: E1216 12:17:23.869236 3629 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:17:23.869562 kubelet[3629]: E1216 12:17:23.869299 3629 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:17:23.869562 kubelet[3629]: E1216 12:17:23.869385 3629 
kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-lft87_calico-system(8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:17:23.871251 containerd[2077]: time="2025-12-16T12:17:23.871222707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:17:24.119841 containerd[2077]: time="2025-12-16T12:17:24.119354701Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:17:24.122498 containerd[2077]: time="2025-12-16T12:17:24.122464199Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:17:24.122612 containerd[2077]: time="2025-12-16T12:17:24.122487512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:17:24.122822 kubelet[3629]: E1216 12:17:24.122728 3629 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:17:24.122899 kubelet[3629]: E1216 12:17:24.122829 3629 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:17:24.123024 kubelet[3629]: E1216 12:17:24.122894 3629 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-lft87_calico-system(8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:17:24.123024 kubelet[3629]: E1216 12:17:24.122932 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lft87" podUID="8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85" Dec 16 12:17:24.480392 kubelet[3629]: E1216 12:17:24.479889 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling 
image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-568467779b-jqkm2" podUID="a62f9a6a-f8d3-4852-be55-6d4bed6c90c8" Dec 16 12:17:24.993995 systemd[1]: Started sshd@14-10.200.20.36:22-10.200.16.10:50870.service - OpenSSH per-connection server daemon (10.200.16.10:50870). Dec 16 12:17:24.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.36:22-10.200.16.10:50870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:24.997619 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:17:24.997659 kernel: audit: type=1130 audit(1765887444.993:831): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.36:22-10.200.16.10:50870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:25.405000 audit[5824]: USER_ACCT pid=5824 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:25.407768 sshd[5824]: Accepted publickey for core from 10.200.16.10 port 50870 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:17:25.424733 sshd-session[5824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:25.423000 audit[5824]: CRED_ACQ pid=5824 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:25.440601 kernel: audit: type=1101 audit(1765887445.405:832): pid=5824 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:25.440696 kernel: audit: type=1103 audit(1765887445.423:833): pid=5824 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:25.451659 kernel: audit: type=1006 audit(1765887445.423:834): pid=5824 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 16 12:17:25.423000 audit[5824]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd3fdcb60 a2=3 a3=0 items=0 ppid=1 pid=5824 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:25.456965 systemd-logind[2038]: New session 18 of user core. Dec 16 12:17:25.470742 kernel: audit: type=1300 audit(1765887445.423:834): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd3fdcb60 a2=3 a3=0 items=0 ppid=1 pid=5824 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:25.423000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:25.478075 kernel: audit: type=1327 audit(1765887445.423:834): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:25.479004 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 12:17:25.484710 containerd[2077]: time="2025-12-16T12:17:25.484229087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:17:25.487000 audit[5824]: USER_START pid=5824 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:25.497000 audit[5835]: CRED_ACQ pid=5835 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:25.525444 kernel: audit: type=1105 audit(1765887445.487:835): pid=5824 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:25.525566 kernel: audit: type=1103 audit(1765887445.497:836): pid=5835 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:25.719987 sshd[5835]: Connection closed by 10.200.16.10 port 50870 Dec 16 12:17:25.720650 sshd-session[5824]: pam_unix(sshd:session): session closed for user core Dec 16 12:17:25.721000 audit[5824]: USER_END pid=5824 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:25.726467 systemd[1]: sshd@14-10.200.20.36:22-10.200.16.10:50870.service: Deactivated successfully. Dec 16 12:17:25.729430 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 12:17:25.746460 systemd-logind[2038]: Session 18 logged out. Waiting for processes to exit. Dec 16 12:17:25.747980 systemd-logind[2038]: Removed session 18. 
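Once a pull fails, kubelet keeps the containers in ImagePullBackOff and retries on an exponential back-off, which is why the "Back-off pulling image" messages in this log recur at growing intervals. The sketch below just prints such a schedule; the 10 s initial delay, doubling factor, and 300 s cap are assumptions based on commonly cited kubelet defaults, not values read from this node's configuration.

```python
# Minimal sketch: an exponential back-off schedule like the one behind the
# recurring ImagePullBackOff retries seen in this log.
def backoff_schedule(initial=10.0, factor=2.0, cap=300.0, steps=8):
    delay, schedule = initial, []
    for _ in range(steps):
        schedule.append(delay)
        delay = min(delay * factor, cap)
    return schedule

print(backoff_schedule())
# -> [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]
```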
Dec 16 12:17:25.721000 audit[5824]: CRED_DISP pid=5824 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:25.764642 kernel: audit: type=1106 audit(1765887445.721:837): pid=5824 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:25.764738 kernel: audit: type=1104 audit(1765887445.721:838): pid=5824 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:25.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.36:22-10.200.16.10:50870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:25.792689 containerd[2077]: time="2025-12-16T12:17:25.792495986Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:17:25.796383 containerd[2077]: time="2025-12-16T12:17:25.796342020Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:17:25.796589 containerd[2077]: time="2025-12-16T12:17:25.796394039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:17:25.796736 kubelet[3629]: E1216 12:17:25.796693 3629 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:17:25.797031 kubelet[3629]: E1216 12:17:25.796741 3629 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:17:25.797031 kubelet[3629]: E1216 12:17:25.796828 3629 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6574577cd4-cw4js_calico-system(4517fd6f-78fe-47ae-9d6d-f6ee6d3ebba7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:17:25.797031 kubelet[3629]: E1216 12:17:25.796853 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6574577cd4-cw4js" podUID="4517fd6f-78fe-47ae-9d6d-f6ee6d3ebba7" Dec 16 12:17:27.478150 kubelet[3629]: E1216 12:17:27.477951 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f8f4c6db-nsvdz" podUID="934998e2-1bd4-4baf-a9e2-cc5a0c414cea" Dec 16 12:17:27.479627 containerd[2077]: time="2025-12-16T12:17:27.479223383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:17:27.751233 containerd[2077]: time="2025-12-16T12:17:27.750861102Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:17:27.754830 containerd[2077]: time="2025-12-16T12:17:27.754661142Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:17:27.754927 containerd[2077]: time="2025-12-16T12:17:27.754822095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:17:27.755143 kubelet[3629]: E1216 12:17:27.755095 3629 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:17:27.755220 kubelet[3629]: E1216 12:17:27.755151 3629 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:17:27.755248 kubelet[3629]: E1216 12:17:27.755225 3629 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-xnsjr_calico-system(42e08362-84ba-4be1-b9a5-3a3391796c9d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:17:27.755370 kubelet[3629]: E1216 12:17:27.755250 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xnsjr" podUID="42e08362-84ba-4be1-b9a5-3a3391796c9d" Dec 16 12:17:30.811264 systemd[1]: Started sshd@15-10.200.20.36:22-10.200.16.10:45610.service - OpenSSH per-connection server daemon (10.200.16.10:45610). 
Dec 16 12:17:30.824571 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:17:30.824688 kernel: audit: type=1130 audit(1765887450.810:840): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.36:22-10.200.16.10:45610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:30.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.36:22-10.200.16.10:45610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:31.252000 audit[5851]: USER_ACCT pid=5851 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:31.260780 sshd[5851]: Accepted publickey for core from 10.200.16.10 port 45610 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:17:31.270000 audit[5851]: CRED_ACQ pid=5851 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:31.272228 sshd-session[5851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:31.286898 kernel: audit: type=1101 audit(1765887451.252:841): pid=5851 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:31.287004 kernel: audit: type=1103 audit(1765887451.270:842): pid=5851 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:31.297801 kernel: audit: type=1006 audit(1765887451.270:843): pid=5851 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Dec 16 12:17:31.296739 systemd-logind[2038]: New session 19 of user core. Dec 16 12:17:31.301042 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 16 12:17:31.270000 audit[5851]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe4531ad0 a2=3 a3=0 items=0 ppid=1 pid=5851 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:31.317901 kernel: audit: type=1300 audit(1765887451.270:843): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe4531ad0 a2=3 a3=0 items=0 ppid=1 pid=5851 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:31.270000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:31.325196 kernel: audit: type=1327 audit(1765887451.270:843): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:31.318000 audit[5851]: USER_START pid=5851 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:31.345302 kernel: audit: type=1105 audit(1765887451.318:844): pid=5851 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:31.325000 audit[5855]: CRED_ACQ pid=5855 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:31.361324 kernel: audit: type=1103 audit(1765887451.325:845): pid=5855 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:31.551329 sshd[5855]: Connection closed by 10.200.16.10 port 45610 Dec 16 12:17:31.553413 sshd-session[5851]: pam_unix(sshd:session): session closed for user core Dec 16 12:17:31.554000 audit[5851]: USER_END pid=5851 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:31.557053 systemd-logind[2038]: Session 19 logged out. Waiting for processes to exit. Dec 16 12:17:31.558889 systemd[1]: sshd@15-10.200.20.36:22-10.200.16.10:45610.service: Deactivated successfully. Dec 16 12:17:31.562375 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 12:17:31.564542 systemd-logind[2038]: Removed session 19. 
Dec 16 12:17:31.554000 audit[5851]: CRED_DISP pid=5851 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:31.597861 kernel: audit: type=1106 audit(1765887451.554:846): pid=5851 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:31.597982 kernel: audit: type=1104 audit(1765887451.554:847): pid=5851 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:31.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.36:22-10.200.16.10:45610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:31.651822 systemd[1]: Started sshd@16-10.200.20.36:22-10.200.16.10:45614.service - OpenSSH per-connection server daemon (10.200.16.10:45614). Dec 16 12:17:31.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.36:22-10.200.16.10:45614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:32.087000 audit[5867]: USER_ACCT pid=5867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:32.088353 sshd[5867]: Accepted publickey for core from 10.200.16.10 port 45614 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:17:32.088000 audit[5867]: CRED_ACQ pid=5867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:32.088000 audit[5867]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffda686270 a2=3 a3=0 items=0 ppid=1 pid=5867 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:32.088000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:32.089914 sshd-session[5867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:32.093857 systemd-logind[2038]: New session 20 of user core. Dec 16 12:17:32.096941 systemd[1]: Started session-20.scope - Session 20 of User core. 
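Every connection in this stretch follows the same pattern: systemd starts a per-connection unit named `sshd@N-<local-ip>:22-<peer-ip>:<port>.service`, a `session-N.scope` opens for user core, and a second or two later the client disconnects and both are deactivated. The sketch below pairs the `Started sshd@...` and `sshd@...service: Deactivated successfully.` messages to measure connection lifetimes; it assumes one journal record per line (as journalctl would normally print them) and a caller-supplied year, since the journal prefix omits it:

```python
import re
from datetime import datetime

START_RE = re.compile(
    r"(?P<ts>\w+ \d+ [\d:.]+) systemd\[1\]: Started (?P<unit>sshd@\S+?\.service)")
STOP_RE = re.compile(
    r"(?P<ts>\w+ \d+ [\d:.]+) systemd\[1\]: (?P<unit>sshd@\S+?\.service): Deactivated successfully\.")

def _ts(stamp: str, year: int) -> datetime:
    # Journal prefix looks like "Dec 16 12:17:31.651822"; the year is not printed.
    return datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S.%f")

def connection_durations(lines, year=2025):
    """Pair Started/Deactivated messages for sshd@ per-connection units."""
    started, durations = {}, {}
    for line in lines:
        if m := START_RE.match(line):
            started[m["unit"]] = _ts(m["ts"], year)
        elif m := STOP_RE.match(line):
            if m["unit"] in started:
                durations[m["unit"]] = _ts(m["ts"], year) - started.pop(m["unit"])
    return durations
```

For sshd@16 above, Started at 12:17:31.651822 and Deactivated at 12:17:32.496138, that works out to roughly 0.84 s per connection.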
Dec 16 12:17:32.101000 audit[5867]: USER_START pid=5867 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:32.103000 audit[5871]: CRED_ACQ pid=5871 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:32.488974 sshd[5871]: Connection closed by 10.200.16.10 port 45614 Dec 16 12:17:32.489572 sshd-session[5867]: pam_unix(sshd:session): session closed for user core Dec 16 12:17:32.490000 audit[5867]: USER_END pid=5867 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:32.490000 audit[5867]: CRED_DISP pid=5867 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:32.496138 systemd[1]: sshd@16-10.200.20.36:22-10.200.16.10:45614.service: Deactivated successfully. Dec 16 12:17:32.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.36:22-10.200.16.10:45614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:32.500304 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 12:17:32.504316 systemd-logind[2038]: Session 20 logged out. Waiting for processes to exit. Dec 16 12:17:32.506322 systemd-logind[2038]: Removed session 20. Dec 16 12:17:32.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.36:22-10.200.16.10:45626 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:32.578045 systemd[1]: Started sshd@17-10.200.20.36:22-10.200.16.10:45626.service - OpenSSH per-connection server daemon (10.200.16.10:45626). 
Dec 16 12:17:33.012000 audit[5881]: USER_ACCT pid=5881 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:33.013145 sshd[5881]: Accepted publickey for core from 10.200.16.10 port 45626 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:17:33.013000 audit[5881]: CRED_ACQ pid=5881 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:33.013000 audit[5881]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe542c450 a2=3 a3=0 items=0 ppid=1 pid=5881 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:33.013000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:33.015955 sshd-session[5881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:33.022026 systemd-logind[2038]: New session 21 of user core. Dec 16 12:17:33.026910 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 12:17:33.028000 audit[5881]: USER_START pid=5881 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:33.030000 audit[5885]: CRED_ACQ pid=5885 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:33.252779 update_engine[2039]: I20251216 12:17:33.251786 2039 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 12:17:33.252779 update_engine[2039]: I20251216 12:17:33.251884 2039 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 12:17:33.252779 update_engine[2039]: I20251216 12:17:33.252242 2039 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 16 12:17:33.263622 update_engine[2039]: E20251216 12:17:33.263494 2039 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 16 12:17:33.263622 update_engine[2039]: I20251216 12:17:33.263602 2039 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 16 12:17:33.841000 audit[5901]: NETFILTER_CFG table=filter:141 family=2 entries=26 op=nft_register_rule pid=5901 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:33.841000 audit[5901]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffc93c7200 a2=0 a3=1 items=0 ppid=3775 pid=5901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:33.841000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:33.848000 audit[5901]: NETFILTER_CFG table=nat:142 family=2 entries=20 op=nft_register_rule pid=5901 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:33.848000 audit[5901]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc93c7200 a2=0 a3=1 items=0 ppid=3775 pid=5901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:33.848000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:33.932145 sshd[5885]: Connection closed by 10.200.16.10 port 45626 Dec 16 12:17:33.932439 sshd-session[5881]: pam_unix(sshd:session): session closed for user core Dec 16 12:17:33.933000 audit[5881]: USER_END pid=5881 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:33.934000 audit[5881]: CRED_DISP pid=5881 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:33.937623 systemd[1]: sshd@17-10.200.20.36:22-10.200.16.10:45626.service: Deactivated successfully. Dec 16 12:17:33.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.36:22-10.200.16.10:45626 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:33.941601 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 12:17:33.944439 systemd-logind[2038]: Session 21 logged out. Waiting for processes to exit. Dec 16 12:17:33.945748 systemd-logind[2038]: Removed session 21. Dec 16 12:17:34.017889 systemd[1]: Started sshd@18-10.200.20.36:22-10.200.16.10:45634.service - OpenSSH per-connection server daemon (10.200.16.10:45634). Dec 16 12:17:34.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.36:22-10.200.16.10:45634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:17:34.426000 audit[5906]: USER_ACCT pid=5906 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:34.427576 sshd[5906]: Accepted publickey for core from 10.200.16.10 port 45634 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:17:34.427000 audit[5906]: CRED_ACQ pid=5906 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:34.427000 audit[5906]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe6366dd0 a2=3 a3=0 items=0 ppid=1 pid=5906 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:34.427000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:34.429398 sshd-session[5906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:34.433657 systemd-logind[2038]: New session 22 of user core. Dec 16 12:17:34.439025 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 16 12:17:34.440000 audit[5906]: USER_START pid=5906 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:34.442000 audit[5910]: CRED_ACQ pid=5910 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:34.481522 kubelet[3629]: E1216 12:17:34.480794 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f8f4c6db-glr2q" podUID="a081eac5-c790-4263-a08c-1af1e10fce20" Dec 16 12:17:34.806047 sshd[5910]: Connection closed by 10.200.16.10 port 45634 Dec 16 12:17:34.810058 sshd-session[5906]: pam_unix(sshd:session): session closed for user core Dec 16 12:17:34.811000 audit[5906]: USER_END pid=5906 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:34.811000 audit[5906]: CRED_DISP pid=5906 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 
12:17:34.815549 systemd[1]: sshd@18-10.200.20.36:22-10.200.16.10:45634.service: Deactivated successfully. Dec 16 12:17:34.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.36:22-10.200.16.10:45634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:34.819949 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 12:17:34.823197 systemd-logind[2038]: Session 22 logged out. Waiting for processes to exit. Dec 16 12:17:34.826169 systemd-logind[2038]: Removed session 22. Dec 16 12:17:34.876000 audit[5922]: NETFILTER_CFG table=filter:143 family=2 entries=38 op=nft_register_rule pid=5922 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:34.876000 audit[5922]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffeedd72d0 a2=0 a3=1 items=0 ppid=3775 pid=5922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:34.876000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:34.880000 audit[5922]: NETFILTER_CFG table=nat:144 family=2 entries=20 op=nft_register_rule pid=5922 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:34.880000 audit[5922]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffeedd72d0 a2=0 a3=1 items=0 ppid=3775 pid=5922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:34.880000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:34.897360 systemd[1]: Started sshd@19-10.200.20.36:22-10.200.16.10:45650.service - OpenSSH per-connection server daemon (10.200.16.10:45650). Dec 16 12:17:34.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.36:22-10.200.16.10:45650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:17:35.329000 audit[5924]: USER_ACCT pid=5924 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:35.330331 sshd[5924]: Accepted publickey for core from 10.200.16.10 port 45650 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:17:35.330000 audit[5924]: CRED_ACQ pid=5924 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:35.330000 audit[5924]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc52e00d0 a2=3 a3=0 items=0 ppid=1 pid=5924 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:35.330000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:35.331818 sshd-session[5924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:35.336118 systemd-logind[2038]: New session 23 of user core. Dec 16 12:17:35.343977 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 16 12:17:35.345000 audit[5924]: USER_START pid=5924 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:35.347000 audit[5928]: CRED_ACQ pid=5928 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:35.480206 kubelet[3629]: E1216 12:17:35.480126 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lft87" podUID="8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85" Dec 16 12:17:35.609850 sshd[5928]: Connection closed by 10.200.16.10 port 45650 Dec 16 12:17:35.610951 sshd-session[5924]: pam_unix(sshd:session): session closed for user core Dec 16 12:17:35.611000 audit[5924]: USER_END pid=5924 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:35.611000 audit[5924]: CRED_DISP pid=5924 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:35.614791 systemd-logind[2038]: Session 23 logged out. Waiting for processes to exit. Dec 16 12:17:35.615479 systemd[1]: sshd@19-10.200.20.36:22-10.200.16.10:45650.service: Deactivated successfully. Dec 16 12:17:35.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.36:22-10.200.16.10:45650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:35.617440 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 12:17:35.619099 systemd-logind[2038]: Removed session 23. Dec 16 12:17:38.478693 kubelet[3629]: E1216 12:17:38.478554 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-568467779b-jqkm2" podUID="a62f9a6a-f8d3-4852-be55-6d4bed6c90c8" Dec 16 12:17:40.478941 kubelet[3629]: E1216 12:17:40.478896 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6574577cd4-cw4js" podUID="4517fd6f-78fe-47ae-9d6d-f6ee6d3ebba7" Dec 16 12:17:40.701766 systemd[1]: Started sshd@20-10.200.20.36:22-10.200.16.10:42002.service - OpenSSH per-connection server daemon (10.200.16.10:42002). Dec 16 12:17:40.706542 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 16 12:17:40.706651 kernel: audit: type=1130 audit(1765887460.701:889): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.36:22-10.200.16.10:42002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:40.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.36:22-10.200.16.10:42002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:17:41.146286 sshd[5939]: Accepted publickey for core from 10.200.16.10 port 42002 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:17:41.145000 audit[5939]: USER_ACCT pid=5939 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:41.166052 sshd-session[5939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:41.164000 audit[5939]: CRED_ACQ pid=5939 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:41.176035 systemd-logind[2038]: New session 24 of user core. Dec 16 12:17:41.186788 kernel: audit: type=1101 audit(1765887461.145:890): pid=5939 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:41.186890 kernel: audit: type=1103 audit(1765887461.164:891): pid=5939 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:41.190901 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 16 12:17:41.200445 kernel: audit: type=1006 audit(1765887461.164:892): pid=5939 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 16 12:17:41.164000 audit[5939]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd83f64b0 a2=3 a3=0 items=0 ppid=1 pid=5939 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:41.220782 kernel: audit: type=1300 audit(1765887461.164:892): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd83f64b0 a2=3 a3=0 items=0 ppid=1 pid=5939 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:41.164000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:41.228310 kernel: audit: type=1327 audit(1765887461.164:892): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:41.220000 audit[5939]: USER_START pid=5939 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:41.249196 kernel: audit: type=1105 audit(1765887461.220:893): pid=5939 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:41.228000 audit[5943]: CRED_ACQ pid=5943 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:41.265073 kernel: audit: type=1103 audit(1765887461.228:894): pid=5943 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:41.451413 sshd[5943]: Connection closed by 10.200.16.10 port 42002 Dec 16 12:17:41.450275 sshd-session[5939]: pam_unix(sshd:session): session closed for user core Dec 16 12:17:41.451000 audit[5939]: USER_END pid=5939 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:41.456123 systemd[1]: sshd@20-10.200.20.36:22-10.200.16.10:42002.service: Deactivated successfully. Dec 16 12:17:41.458778 systemd-logind[2038]: Session 24 logged out. Waiting for processes to exit. Dec 16 12:17:41.460390 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 12:17:41.463935 systemd-logind[2038]: Removed session 24. Dec 16 12:17:41.451000 audit[5939]: CRED_DISP pid=5939 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:41.489306 kernel: audit: type=1106 audit(1765887461.451:895): pid=5939 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:41.489412 kernel: audit: type=1104 audit(1765887461.451:896): pid=5939 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:41.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.36:22-10.200.16.10:42002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:17:42.479067 kubelet[3629]: E1216 12:17:42.479018 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f8f4c6db-nsvdz" podUID="934998e2-1bd4-4baf-a9e2-cc5a0c414cea" Dec 16 12:17:42.480066 kubelet[3629]: E1216 12:17:42.479371 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xnsjr" podUID="42e08362-84ba-4be1-b9a5-3a3391796c9d" Dec 16 12:17:43.253125 update_engine[2039]: I20251216 12:17:43.252813 2039 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 12:17:43.253125 update_engine[2039]: I20251216 12:17:43.252911 2039 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 12:17:43.253832 update_engine[2039]: I20251216 12:17:43.253789 2039 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 16 12:17:43.288480 update_engine[2039]: E20251216 12:17:43.288061 2039 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 16 12:17:43.288480 update_engine[2039]: I20251216 12:17:43.288194 2039 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 16 12:17:43.288480 update_engine[2039]: I20251216 12:17:43.288203 2039 omaha_request_action.cc:617] Omaha request response: Dec 16 12:17:43.288480 update_engine[2039]: E20251216 12:17:43.288308 2039 omaha_request_action.cc:636] Omaha request network transfer failed. Dec 16 12:17:43.288480 update_engine[2039]: I20251216 12:17:43.288324 2039 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Dec 16 12:17:43.288480 update_engine[2039]: I20251216 12:17:43.288328 2039 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 16 12:17:43.288480 update_engine[2039]: I20251216 12:17:43.288331 2039 update_attempter.cc:306] Processing Done. Dec 16 12:17:43.288480 update_engine[2039]: E20251216 12:17:43.288344 2039 update_attempter.cc:619] Update failed. Dec 16 12:17:43.288480 update_engine[2039]: I20251216 12:17:43.288348 2039 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Dec 16 12:17:43.288480 update_engine[2039]: I20251216 12:17:43.288352 2039 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Dec 16 12:17:43.288480 update_engine[2039]: I20251216 12:17:43.288357 2039 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
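The update_engine block above is one complete failed Omaha check: the engine posts to a server whose address is literally the string `disabled`, curl cannot resolve that as a hostname, the failure is converted to error code 2000 (kActionCodeOmahaErrorInHTTPResponse), an error event is sent, and the next check is scheduled 42m24s out. Its messages use a glog-style prefix (severity letter, YYYYMMDD date, time, logging PID, then `file.cc:line]`), which the sketch below parses; the field layout is assumed from the lines shown here:

```python
import re

# glog-style prefix used by update_engine, e.g.
#   E20251216 12:17:43.288061 2039 libcurl_http_fetcher.cc:266] Unable to get ...
# search() is used so the journal's own "update_engine[2039]: " prefix can stay on the line.
GLOG_RE = re.compile(
    r"(?P<sev>[A-Z])(?P<date>\d{8}) (?P<time>[\d:.]+) (?P<pid>\d+) "
    r"(?P<src>[\w.]+:\d+)\] (?P<msg>.*)$"
)

def parse_update_engine(line: str) -> dict | None:
    m = GLOG_RE.search(line)
    return m.groupdict() if m else None

rec = parse_update_engine(
    "E20251216 12:17:43.288061 2039 libcurl_http_fetcher.cc:266] "
    "Unable to get http response code: Could not resolve host: disabled (Domain name not found)"
)
print(rec["sev"], rec["src"], rec["msg"])
```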
Dec 16 12:17:43.288814 update_engine[2039]: I20251216 12:17:43.288789 2039 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 16 12:17:43.288832 locksmithd[2114]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Dec 16 12:17:43.289070 update_engine[2039]: I20251216 12:17:43.288822 2039 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 16 12:17:43.289070 update_engine[2039]: I20251216 12:17:43.288827 2039 omaha_request_action.cc:272] Request: Dec 16 12:17:43.289070 update_engine[2039]: Dec 16 12:17:43.289070 update_engine[2039]: Dec 16 12:17:43.289070 update_engine[2039]: Dec 16 12:17:43.289070 update_engine[2039]: Dec 16 12:17:43.289070 update_engine[2039]: Dec 16 12:17:43.289070 update_engine[2039]: Dec 16 12:17:43.289070 update_engine[2039]: I20251216 12:17:43.288832 2039 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 12:17:43.289070 update_engine[2039]: I20251216 12:17:43.288853 2039 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 12:17:43.289194 update_engine[2039]: I20251216 12:17:43.289171 2039 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 16 12:17:43.362061 update_engine[2039]: E20251216 12:17:43.361748 2039 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 16 12:17:43.362061 update_engine[2039]: I20251216 12:17:43.362013 2039 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 16 12:17:43.362061 update_engine[2039]: I20251216 12:17:43.362058 2039 omaha_request_action.cc:617] Omaha request response: Dec 16 12:17:43.362061 update_engine[2039]: I20251216 12:17:43.362068 2039 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 16 12:17:43.362061 update_engine[2039]: I20251216 12:17:43.362073 2039 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 16 12:17:43.362061 update_engine[2039]: I20251216 12:17:43.362076 2039 update_attempter.cc:306] Processing Done. Dec 16 12:17:43.362292 update_engine[2039]: I20251216 12:17:43.362083 2039 update_attempter.cc:310] Error event sent. Dec 16 12:17:43.362292 update_engine[2039]: I20251216 12:17:43.362092 2039 update_check_scheduler.cc:74] Next update check in 42m24s Dec 16 12:17:43.362665 locksmithd[2114]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Dec 16 12:17:46.546014 systemd[1]: Started sshd@21-10.200.20.36:22-10.200.16.10:42008.service - OpenSSH per-connection server daemon (10.200.16.10:42008). Dec 16 12:17:46.551799 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:17:46.551892 kernel: audit: type=1130 audit(1765887466.545:898): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.36:22-10.200.16.10:42008 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:46.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.36:22-10.200.16.10:42008 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:17:46.998000 audit[5954]: USER_ACCT pid=5954 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:47.001793 sshd[5954]: Accepted publickey for core from 10.200.16.10 port 42008 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:17:47.019676 sshd-session[5954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:47.018000 audit[5954]: CRED_ACQ pid=5954 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:47.037203 kernel: audit: type=1101 audit(1765887466.998:899): pid=5954 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:47.037289 kernel: audit: type=1103 audit(1765887467.018:900): pid=5954 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:47.047505 kernel: audit: type=1006 audit(1765887467.018:901): pid=5954 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 16 12:17:47.018000 audit[5954]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff5b76f40 a2=3 a3=0 items=0 ppid=1 pid=5954 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:47.067104 kernel: audit: type=1300 audit(1765887467.018:901): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff5b76f40 a2=3 a3=0 items=0 ppid=1 pid=5954 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:47.018000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:47.071135 systemd-logind[2038]: New session 25 of user core. Dec 16 12:17:47.075316 kernel: audit: type=1327 audit(1765887467.018:901): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:47.078949 systemd[1]: Started session-25.scope - Session 25 of User core. 
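Because kauditd rate-limits its console echoes (the `kauditd_printk_skb: N callbacks suppressed` lines), the numeric `type=` echoes and the named userspace records do not always appear together, but where both are present they share a serial number. The table below lists only the pairings that are actually visible in this journal, which is enough to label the raw kernel echoes in this section:

```python
# Numeric audit record types paired with the named userspace records that share the
# same serial number in this journal (e.g. type=1106 <-> USER_END at serial :846).
AUDIT_TYPES_SEEN = {
    1101: "USER_ACCT",      # PAM accounting check at login
    1103: "CRED_ACQ",       # PAM credentials acquired
    1105: "USER_START",     # PAM session opened
    1106: "USER_END",       # PAM session closed
    1104: "CRED_DISP",      # PAM credentials disposed
    1130: "SERVICE_START",  # systemd unit started (the sshd@... per-connection units)
    1300: "SYSCALL",        # syscall record attached to the login event
    1327: "PROCTITLE",      # hex-encoded command line (see the decoder above)
    1325: "NETFILTER_CFG",  # iptables/nft ruleset changes
}

def label(kernel_echo_type: int) -> str:
    return AUDIT_TYPES_SEEN.get(kernel_echo_type, f"type={kernel_echo_type}")
```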
Dec 16 12:17:47.081000 audit[5954]: USER_START pid=5954 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:47.103000 audit[5958]: CRED_ACQ pid=5958 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:47.119115 kernel: audit: type=1105 audit(1765887467.081:902): pid=5954 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:47.119230 kernel: audit: type=1103 audit(1765887467.103:903): pid=5958 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:47.313788 sshd[5958]: Connection closed by 10.200.16.10 port 42008 Dec 16 12:17:47.314399 sshd-session[5954]: pam_unix(sshd:session): session closed for user core Dec 16 12:17:47.315000 audit[5954]: USER_END pid=5954 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:47.318397 systemd-logind[2038]: Session 25 logged out. Waiting for processes to exit. Dec 16 12:17:47.320416 systemd[1]: sshd@21-10.200.20.36:22-10.200.16.10:42008.service: Deactivated successfully. Dec 16 12:17:47.323078 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 12:17:47.325711 systemd-logind[2038]: Removed session 25. Dec 16 12:17:47.315000 audit[5954]: CRED_DISP pid=5954 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:47.351939 kernel: audit: type=1106 audit(1765887467.315:904): pid=5954 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:47.352046 kernel: audit: type=1104 audit(1765887467.315:905): pid=5954 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:47.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.36:22-10.200.16.10:42008 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:17:47.480656 kubelet[3629]: E1216 12:17:47.480275 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f8f4c6db-glr2q" podUID="a081eac5-c790-4263-a08c-1af1e10fce20" Dec 16 12:17:47.481442 kubelet[3629]: E1216 12:17:47.481407 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lft87" podUID="8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85" Dec 16 12:17:50.478300 kubelet[3629]: E1216 12:17:50.478182 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-568467779b-jqkm2" podUID="a62f9a6a-f8d3-4852-be55-6d4bed6c90c8" Dec 16 12:17:51.975000 audit[5973]: NETFILTER_CFG table=filter:145 family=2 entries=26 op=nft_register_rule pid=5973 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:51.979203 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:17:51.979282 kernel: audit: type=1325 audit(1765887471.975:907): table=filter:145 family=2 entries=26 op=nft_register_rule pid=5973 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:51.975000 audit[5973]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd2c894d0 a2=0 a3=1 items=0 ppid=3775 pid=5973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:52.007198 kernel: audit: type=1300 audit(1765887471.975:907): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd2c894d0 a2=0 a3=1 items=0 ppid=3775 pid=5973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:51.975000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:52.016304 kernel: audit: type=1327 audit(1765887471.975:907): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:52.007000 audit[5973]: NETFILTER_CFG table=nat:146 family=2 entries=104 op=nft_register_chain pid=5973 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:52.025950 kernel: audit: type=1325 audit(1765887472.007:908): table=nat:146 family=2 entries=104 op=nft_register_chain pid=5973 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:52.007000 audit[5973]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffd2c894d0 a2=0 a3=1 items=0 ppid=3775 pid=5973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:52.044188 kernel: audit: type=1300 audit(1765887472.007:908): arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffd2c894d0 a2=0 a3=1 items=0 ppid=3775 pid=5973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:52.007000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:52.052563 kernel: audit: type=1327 audit(1765887472.007:908): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:52.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.36:22-10.200.16.10:51218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:52.402385 systemd[1]: Started sshd@22-10.200.20.36:22-10.200.16.10:51218.service - OpenSSH per-connection server daemon (10.200.16.10:51218). Dec 16 12:17:52.417849 kernel: audit: type=1130 audit(1765887472.401:909): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.36:22-10.200.16.10:51218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:17:52.854000 audit[5975]: USER_ACCT pid=5975 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:52.872121 sshd[5975]: Accepted publickey for core from 10.200.16.10 port 51218 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:17:52.872109 sshd-session[5975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:52.870000 audit[5975]: CRED_ACQ pid=5975 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:52.888558 kernel: audit: type=1101 audit(1765887472.854:910): pid=5975 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:52.888650 kernel: audit: type=1103 audit(1765887472.870:911): pid=5975 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:52.899794 kernel: audit: type=1006 audit(1765887472.870:912): pid=5975 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 16 12:17:52.870000 audit[5975]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdbbe7320 a2=3 a3=0 items=0 ppid=1 pid=5975 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:52.870000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:52.904609 systemd-logind[2038]: New session 26 of user core. Dec 16 12:17:52.908909 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 16 12:17:52.911000 audit[5975]: USER_START pid=5975 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:52.912000 audit[5979]: CRED_ACQ pid=5979 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:53.138136 sshd[5979]: Connection closed by 10.200.16.10 port 51218 Dec 16 12:17:53.137402 sshd-session[5975]: pam_unix(sshd:session): session closed for user core Dec 16 12:17:53.138000 audit[5975]: USER_END pid=5975 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:53.138000 audit[5975]: CRED_DISP pid=5975 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:53.141583 systemd[1]: sshd@22-10.200.20.36:22-10.200.16.10:51218.service: Deactivated successfully. Dec 16 12:17:53.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.36:22-10.200.16.10:51218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:53.143673 systemd[1]: session-26.scope: Deactivated successfully. Dec 16 12:17:53.147065 systemd-logind[2038]: Session 26 logged out. Waiting for processes to exit. Dec 16 12:17:53.147986 systemd-logind[2038]: Removed session 26. 
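The kubelet `pod_workers.go:1324` errors that recur through this part of the journal are all one failure mode: the Calico images referenced at tag v3.30.4 are not present under ghcr.io/flatcar, containerd's resolver returns NotFound, and the affected pods (calico-apiserver, csi-node-driver, whisker, goldmane, calico-kube-controllers) stay in ImagePullBackOff. A rough sketch for pulling the failing image references and pod names out of lines like these; the backslash escaping is assumed to match what this journal prints, so treat the regexes as illustrative rather than definitive:

```python
import re

# The journal wraps the image reference in escaped quotes, e.g.
#   Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\"
# (escaping assumed from the lines above; adjust if your journal quotes differently).
IMAGE_RE = re.compile(r'Back-off pulling image \\+"([^"\\]+)\\+"')
POD_RE = re.compile(r'pod="([^"]+)"')

def backoff_summary(journal_line: str) -> tuple[str | None, set[str]]:
    """Return (pod, {failing image refs}) for one kubelet 'Error syncing pod' line."""
    pod = POD_RE.search(journal_line)
    return (pod.group(1) if pod else None, set(IMAGE_RE.findall(journal_line)))
```

Run over the whole journal, this collapses the repeated back-off retries into a short list of missing image tags per pod.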
Dec 16 12:17:53.478509 kubelet[3629]: E1216 12:17:53.478052 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f8f4c6db-nsvdz" podUID="934998e2-1bd4-4baf-a9e2-cc5a0c414cea" Dec 16 12:17:53.479829 kubelet[3629]: E1216 12:17:53.479074 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xnsjr" podUID="42e08362-84ba-4be1-b9a5-3a3391796c9d" Dec 16 12:17:55.478875 kubelet[3629]: E1216 12:17:55.478007 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6574577cd4-cw4js" podUID="4517fd6f-78fe-47ae-9d6d-f6ee6d3ebba7" Dec 16 12:17:58.234329 systemd[1]: Started sshd@23-10.200.20.36:22-10.200.16.10:51234.service - OpenSSH per-connection server daemon (10.200.16.10:51234). Dec 16 12:17:58.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.36:22-10.200.16.10:51234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:58.241784 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 12:17:58.241851 kernel: audit: type=1130 audit(1765887478.233:918): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.36:22-10.200.16.10:51234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:17:58.676000 audit[6015]: USER_ACCT pid=6015 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:58.693979 sshd[6015]: Accepted publickey for core from 10.200.16.10 port 51234 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:17:58.693000 audit[6015]: CRED_ACQ pid=6015 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:58.695724 sshd-session[6015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:58.709415 kernel: audit: type=1101 audit(1765887478.676:919): pid=6015 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:58.709534 kernel: audit: type=1103 audit(1765887478.693:920): pid=6015 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:58.720843 kernel: audit: type=1006 audit(1765887478.693:921): pid=6015 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Dec 16 12:17:58.693000 audit[6015]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe0196270 a2=3 a3=0 items=0 ppid=1 pid=6015 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:58.726027 systemd-logind[2038]: New session 27 of user core. Dec 16 12:17:58.738292 kernel: audit: type=1300 audit(1765887478.693:921): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe0196270 a2=3 a3=0 items=0 ppid=1 pid=6015 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:58.693000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:58.745227 kernel: audit: type=1327 audit(1765887478.693:921): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:58.748296 systemd[1]: Started session-27.scope - Session 27 of User core. 
Dec 16 12:17:58.751000 audit[6015]: USER_START pid=6015 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:58.772000 audit[6019]: CRED_ACQ pid=6019 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:58.788602 kernel: audit: type=1105 audit(1765887478.751:922): pid=6015 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:58.788697 kernel: audit: type=1103 audit(1765887478.772:923): pid=6019 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:59.004315 sshd[6019]: Connection closed by 10.200.16.10 port 51234 Dec 16 12:17:59.004552 sshd-session[6015]: pam_unix(sshd:session): session closed for user core Dec 16 12:17:59.004000 audit[6015]: USER_END pid=6015 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:59.013427 systemd-logind[2038]: Session 27 logged out. Waiting for processes to exit. Dec 16 12:17:59.015145 systemd[1]: sshd@23-10.200.20.36:22-10.200.16.10:51234.service: Deactivated successfully. Dec 16 12:17:59.018838 systemd[1]: session-27.scope: Deactivated successfully. Dec 16 12:17:59.021170 systemd-logind[2038]: Removed session 27. Dec 16 12:17:59.004000 audit[6015]: CRED_DISP pid=6015 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:59.042403 kernel: audit: type=1106 audit(1765887479.004:924): pid=6015 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:59.042465 kernel: audit: type=1104 audit(1765887479.004:925): pid=6015 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:59.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.36:22-10.200.16.10:51234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:18:02.478658 kubelet[3629]: E1216 12:18:02.478399 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f8f4c6db-glr2q" podUID="a081eac5-c790-4263-a08c-1af1e10fce20" Dec 16 12:18:02.479692 kubelet[3629]: E1216 12:18:02.479056 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lft87" podUID="8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85" Dec 16 12:18:02.479692 kubelet[3629]: E1216 12:18:02.479118 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-568467779b-jqkm2" podUID="a62f9a6a-f8d3-4852-be55-6d4bed6c90c8" Dec 16 12:18:04.097949 systemd[1]: Started sshd@24-10.200.20.36:22-10.200.16.10:34590.service - OpenSSH per-connection server daemon (10.200.16.10:34590). Dec 16 12:18:04.102015 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:18:04.102106 kernel: audit: type=1130 audit(1765887484.097:927): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.36:22-10.200.16.10:34590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:04.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.36:22-10.200.16.10:34590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:18:04.477852 kubelet[3629]: E1216 12:18:04.477710 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xnsjr" podUID="42e08362-84ba-4be1-b9a5-3a3391796c9d" Dec 16 12:18:04.535000 audit[6031]: USER_ACCT pid=6031 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:04.553368 sshd[6031]: Accepted publickey for core from 10.200.16.10 port 34590 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:18:04.554368 sshd-session[6031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:04.553000 audit[6031]: CRED_ACQ pid=6031 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:04.559779 kernel: audit: type=1101 audit(1765887484.535:928): pid=6031 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:04.562919 systemd-logind[2038]: New session 28 of user core. Dec 16 12:18:04.586626 kernel: audit: type=1103 audit(1765887484.553:929): pid=6031 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:04.586706 kernel: audit: type=1006 audit(1765887484.553:930): pid=6031 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Dec 16 12:18:04.553000 audit[6031]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe2fdc790 a2=3 a3=0 items=0 ppid=1 pid=6031 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:04.604095 kernel: audit: type=1300 audit(1765887484.553:930): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe2fdc790 a2=3 a3=0 items=0 ppid=1 pid=6031 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:04.590430 systemd[1]: Started session-28.scope - Session 28 of User core. 
Dec 16 12:18:04.553000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:04.615296 kernel: audit: type=1327 audit(1765887484.553:930): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:04.606000 audit[6031]: USER_START pid=6031 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:04.636265 kernel: audit: type=1105 audit(1765887484.606:931): pid=6031 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:04.615000 audit[6035]: CRED_ACQ pid=6035 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:04.653175 kernel: audit: type=1103 audit(1765887484.615:932): pid=6035 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:04.817710 sshd[6035]: Connection closed by 10.200.16.10 port 34590 Dec 16 12:18:04.816505 sshd-session[6031]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:04.816000 audit[6031]: USER_END pid=6031 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:04.820445 systemd-logind[2038]: Session 28 logged out. Waiting for processes to exit. Dec 16 12:18:04.821036 systemd[1]: sshd@24-10.200.20.36:22-10.200.16.10:34590.service: Deactivated successfully. Dec 16 12:18:04.825410 systemd[1]: session-28.scope: Deactivated successfully. Dec 16 12:18:04.828682 systemd-logind[2038]: Removed session 28. 
Dec 16 12:18:04.817000 audit[6031]: CRED_DISP pid=6031 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:04.853569 kernel: audit: type=1106 audit(1765887484.816:933): pid=6031 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:04.853694 kernel: audit: type=1104 audit(1765887484.817:934): pid=6031 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:04.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.36:22-10.200.16.10:34590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:08.479069 kubelet[3629]: E1216 12:18:08.479020 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f8f4c6db-nsvdz" podUID="934998e2-1bd4-4baf-a9e2-cc5a0c414cea" Dec 16 12:18:09.477695 kubelet[3629]: E1216 12:18:09.477624 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6574577cd4-cw4js" podUID="4517fd6f-78fe-47ae-9d6d-f6ee6d3ebba7" Dec 16 12:18:09.898223 systemd[1]: Started sshd@25-10.200.20.36:22-10.200.16.10:34604.service - OpenSSH per-connection server daemon (10.200.16.10:34604). Dec 16 12:18:09.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.36:22-10.200.16.10:34604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:09.902060 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:18:09.902104 kernel: audit: type=1130 audit(1765887489.897:936): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.36:22-10.200.16.10:34604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:18:10.311000 audit[6046]: USER_ACCT pid=6046 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:10.314995 sshd[6046]: Accepted publickey for core from 10.200.16.10 port 34604 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:18:10.331224 sshd-session[6046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:10.329000 audit[6046]: CRED_ACQ pid=6046 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:10.340821 systemd-logind[2038]: New session 29 of user core. Dec 16 12:18:10.347427 kernel: audit: type=1101 audit(1765887490.311:937): pid=6046 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:10.347506 kernel: audit: type=1103 audit(1765887490.329:938): pid=6046 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:10.357937 kernel: audit: type=1006 audit(1765887490.329:939): pid=6046 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Dec 16 12:18:10.329000 audit[6046]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe8e7bd50 a2=3 a3=0 items=0 ppid=1 pid=6046 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:10.361198 systemd[1]: Started session-29.scope - Session 29 of User core. 
Dec 16 12:18:10.378951 kernel: audit: type=1300 audit(1765887490.329:939): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe8e7bd50 a2=3 a3=0 items=0 ppid=1 pid=6046 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:10.329000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:10.387907 kernel: audit: type=1327 audit(1765887490.329:939): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:10.366000 audit[6046]: USER_START pid=6046 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:10.407165 kernel: audit: type=1105 audit(1765887490.366:940): pid=6046 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:10.380000 audit[6050]: CRED_ACQ pid=6050 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:10.422635 kernel: audit: type=1103 audit(1765887490.380:941): pid=6050 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:10.600964 sshd[6050]: Connection closed by 10.200.16.10 port 34604 Dec 16 12:18:10.601703 sshd-session[6046]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:10.602000 audit[6046]: USER_END pid=6046 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:10.605958 systemd-logind[2038]: Session 29 logged out. Waiting for processes to exit. Dec 16 12:18:10.606083 systemd[1]: sshd@25-10.200.20.36:22-10.200.16.10:34604.service: Deactivated successfully. Dec 16 12:18:10.609603 systemd[1]: session-29.scope: Deactivated successfully. Dec 16 12:18:10.613841 systemd-logind[2038]: Removed session 29. 
Dec 16 12:18:10.602000 audit[6046]: CRED_DISP pid=6046 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:10.639064 kernel: audit: type=1106 audit(1765887490.602:942): pid=6046 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:10.639177 kernel: audit: type=1104 audit(1765887490.602:943): pid=6046 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:10.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.36:22-10.200.16.10:34604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:13.480548 kubelet[3629]: E1216 12:18:13.480388 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-568467779b-jqkm2" podUID="a62f9a6a-f8d3-4852-be55-6d4bed6c90c8" Dec 16 12:18:15.478726 kubelet[3629]: E1216 12:18:15.477869 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f8f4c6db-glr2q" podUID="a081eac5-c790-4263-a08c-1af1e10fce20" Dec 16 12:18:15.681393 systemd[1]: Started sshd@26-10.200.20.36:22-10.200.16.10:38512.service - OpenSSH per-connection server daemon (10.200.16.10:38512). Dec 16 12:18:15.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.20.36:22-10.200.16.10:38512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:15.684954 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:18:15.685034 kernel: audit: type=1130 audit(1765887495.681:945): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.20.36:22-10.200.16.10:38512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:18:16.087000 audit[6064]: USER_ACCT pid=6064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:16.107731 sshd[6064]: Accepted publickey for core from 10.200.16.10 port 38512 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:18:16.108668 sshd-session[6064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:16.107000 audit[6064]: CRED_ACQ pid=6064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:16.127777 kernel: audit: type=1101 audit(1765887496.087:946): pid=6064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:16.127898 kernel: audit: type=1103 audit(1765887496.107:947): pid=6064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:16.127932 kernel: audit: type=1006 audit(1765887496.107:948): pid=6064 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Dec 16 12:18:16.107000 audit[6064]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffe85d340 a2=3 a3=0 items=0 ppid=1 pid=6064 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:16.157937 kernel: audit: type=1300 audit(1765887496.107:948): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffe85d340 a2=3 a3=0 items=0 ppid=1 pid=6064 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:16.107000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:16.169734 kernel: audit: type=1327 audit(1765887496.107:948): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:16.170768 systemd-logind[2038]: New session 30 of user core. Dec 16 12:18:16.185004 systemd[1]: Started session-30.scope - Session 30 of User core. 
Dec 16 12:18:16.189000 audit[6064]: USER_START pid=6064 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:16.214000 audit[6068]: CRED_ACQ pid=6068 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:16.237697 kernel: audit: type=1105 audit(1765887496.189:949): pid=6064 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:16.237835 kernel: audit: type=1103 audit(1765887496.214:950): pid=6068 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:16.420289 sshd[6068]: Connection closed by 10.200.16.10 port 38512 Dec 16 12:18:16.421110 sshd-session[6064]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:16.421000 audit[6064]: USER_END pid=6064 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:16.444136 systemd[1]: sshd@26-10.200.20.36:22-10.200.16.10:38512.service: Deactivated successfully. Dec 16 12:18:16.445746 systemd[1]: session-30.scope: Deactivated successfully. Dec 16 12:18:16.422000 audit[6064]: CRED_DISP pid=6064 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:16.461478 kernel: audit: type=1106 audit(1765887496.421:951): pid=6064 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:16.461634 kernel: audit: type=1104 audit(1765887496.422:952): pid=6064 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:16.463134 systemd-logind[2038]: Session 30 logged out. Waiting for processes to exit. Dec 16 12:18:16.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.20.36:22-10.200.16.10:38512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:16.465173 systemd-logind[2038]: Removed session 30. 
Dec 16 12:18:17.480359 kubelet[3629]: E1216 12:18:17.479917 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xnsjr" podUID="42e08362-84ba-4be1-b9a5-3a3391796c9d" Dec 16 12:18:17.481381 kubelet[3629]: E1216 12:18:17.481355 3629 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lft87" podUID="8ad17ca2-8ff1-4ee9-bc62-9c8a663b9e85"